Commit 35271227 authored by Linus Torvalds's avatar Linus Torvalds

Merge tag 'dmaengine-4.3-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This time we have aded a new capability for scatter-gathered memset
  using dmaengine APIs.  This is supported in xdmac & hdmac drivers

  We have added support for reusing descriptors, for use cases such as
  video buffers etc.  Driver support will follow

  The behaviour of descriptor ack has been clarified and documented

  New devices added are:
   - dma controller in sun[457]i SoCs
   - lpc18xx dmamux
   - ZTE ZX296702 dma controller
   - Analog Devices AXI-DMAC DMA controller
   - eDMA support for dma-crossbar
   - imx6sx support in imx-sdma driver
   - imx-sdma device to device support

  Other:
   - jz4780 fixes
   - large ioatdma refactor and cleanup, removing the deprecated ioat v1
     and v2 support, plus fixes
   - ACPI support in X-Gene DMA engine driver
   - ipu irq fixes
   - mvxor fixes
   - minor fixes spread through drivers"

[ The Kconfig and Makefile entries got re-sorted alphabetically, and I
  handled the conflict with the new Intel integrated IDMA driver by
  slightly mis-sorting it on purpose: "IDMA64" got sorted after "IMX" in
  order to keep the Intel entries together.  I think it might be a good
  idea to just rename the IDMA64 config entry to INTEL_IDMA64 to make
  the sorting be a true sort, not this mishmash.

  Also, this merge disables the COMPILE_TEST for the sun4i DMA
  controller, because it does not compile cleanly at all.     - Linus ]

* tag 'dmaengine-4.3-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (89 commits)
  dmaengine: ioatdma: add Broadwell EP ioatdma PCI dev IDs
  dmaengine :ipu: change ipu_irq_handler() to remove compile warning
  dmaengine: ioatdma: Fix variable array length
  dmaengine: ioatdma: fix sparse "error" with prep lock
  dmaengine: hdmac: Add memset capabilities
  dmaengine: sort the sh Makefile
  dmaengine: sort the sh Kconfig
  dmaengine: sort the dw Kconfig
  dmaengine: sort the Kconfig
  dmaengine: sort the makefile
  drivers/dma: make mv_xor.c driver explicitly non-modular
  dmaengine: Add support for the Analog Devices AXI-DMAC DMA controller
  devicetree: Add bindings documentation for Analog Devices AXI-DMAC
  dmaengine: xgene-dma: Fix the lock to allow client for further submission of requests
  dmaengine: ioatdma: fix coccinelle warning
  dmaengine: ioatdma: fix zero day warning on incompatible pointer type
  dmaengine: tegra-apb: Simplify locking for device using global pause
  dmaengine: tegra-apb: Remove unnecessary return statements and variables
  dmaengine: tegra-apb: Avoid unnecessary channel base address calculation
  dmaengine: tegra-apb: Remove unused variables
  ...
parents 88a99886 ab98193d
Analog Devices AXI-DMAC DMA controller
Required properties:
- compatible: Must be "adi,axi-dmac-1.00.a".
- reg: Specification for the controller's memory mapped register map.
- interrupts: Specification for the controller's interrupt.
- clocks: Phandle and specifier to the controller's AXI interface clock.
- #dma-cells: Must be 1.
Required sub-nodes:
- adi,channels: This sub-node must contain a sub-node for each DMA channel. For
the channel sub-nodes the following bindings apply. They must match the
configuration options of the peripheral as it was instantiated.
Required properties for adi,channels sub-node:
- #size-cells: Must be 0
- #address-cells: Must be 1
Required channel sub-node properties:
- reg: Which channel this node refers to.
- adi,length-width: Width of the DMA transfer length register.
- adi,source-bus-width,
adi,destination-bus-width: Width of the source or destination bus in bits.
- adi,source-bus-type,
adi,destination-bus-type: Type of the source or destination bus. Must be one
of the following:
0 (AXI_DMAC_TYPE_AXI_MM): Memory mapped AXI interface
1 (AXI_DMAC_TYPE_AXI_STREAM): Streaming AXI interface
2 (AXI_DMAC_TYPE_AXI_FIFO): FIFO interface
Optional channel properties:
- adi,cyclic: Must be set if the channel supports hardware cyclic DMA
transfers.
- adi,2d: Must be set if the channel supports hardware 2D DMA transfers.
DMA clients connected to the AXI-DMAC DMA controller must use the format
described in the dma.txt file using a one-cell specifier. The value of the
specifier refers to the DMA channel index.
Example:
dma: dma@7c420000 {
	compatible = "adi,axi-dmac-1.00.a";
	reg = <0x7c420000 0x10000>;
	interrupts = <0 57 0>;
	clocks = <&clkc 16>;
	#dma-cells = <1>;

	adi,channels {
		#size-cells = <0>;
		#address-cells = <1>;

		dma-channel@0 {
			reg = <0>;
			adi,source-bus-width = <32>;
			adi,source-bus-type = <ADI_AXI_DMAC_TYPE_MM_AXI>;
			adi,destination-bus-width = <64>;
			adi,destination-bus-type = <ADI_AXI_DMAC_TYPE_FIFO>;
		};
	};
};
* ARM PrimeCell PL080 and PL081 and derivatives DMA controller
Required properties:
- compatible: "arm,pl080", "arm,primecell";
"arm,pl081", "arm,primecell";
- reg: Address range of the PL08x registers
- interrupts: The PL08x interrupt number
- clocks: The clock running the IP core
- clock-names: Must contain "apb_pclk"
- lli-bus-interface-ahb1: if AHB master 1 is eligible for fetching LLIs
- lli-bus-interface-ahb2: if AHB master 2 is eligible for fetching LLIs
- mem-bus-interface-ahb1: if AHB master 1 is eligible for fetching memory contents
- mem-bus-interface-ahb2: if AHB master 2 is eligible for fetching memory contents
- #dma-cells: must be <2>. The first cell should contain the DMA request,
  the second cell should contain either 1 or 2 depending on which AHB
  master is used.
Optional properties:
- dma-channels: contains the total number of DMA channels supported by the DMAC
- dma-requests: contains the total number of DMA requests supported by the DMAC
- memcpy-burst-size: the size of the bursts for memcpy: 1, 4, 8, 16, 32,
  64, 128 or 256 bytes are legal values
- memcpy-bus-width: the bus width used for memcpy: 8, 16 or 32 are legal
values
Clients
Required properties:
- dmas: List of DMA controller phandle, request channel and AHB master id
- dma-names: Names of the aforementioned requested channels
Example:
dmac0: dma-controller@10130000 {
	compatible = "arm,pl080", "arm,primecell";
	reg = <0x10130000 0x1000>;
	interrupt-parent = <&vica>;
	interrupts = <15>;
	clocks = <&hclkdma0>;
	clock-names = "apb_pclk";
	lli-bus-interface-ahb1;
	lli-bus-interface-ahb2;
	mem-bus-interface-ahb2;
	memcpy-burst-size = <256>;
	memcpy-bus-width = <32>;
	#dma-cells = <2>;
};

device@40008000 {
	...
	dmas = <&dmac0 0 2
		&dmac0 1 2>;
	dma-names = "tx", "rx";
	...
};
NXP LPC18xx/43xx DMA MUX (DMA request router)
Required properties:
- compatible: "nxp,lpc1850-dmamux"
- reg: Memory map for accessing module
- #dma-cells: Should be set to <3>.
  * 1st cell contains the master dma request signal
  * 2nd cell contains the mux value (0-3) for the peripheral
  * 3rd cell contains either 1 or 2 depending on the AHB
    master used.
- dma-requests: Number of DMA requests for the mux
- dma-masters: phandle pointing to the DMA controller
The DMA controller node needs to have the following properties:
- dma-requests: Number of DMA requests the controller can handle
Example:
dmac: dma@40002000 {
	compatible = "nxp,lpc1850-gpdma", "arm,pl080", "arm,primecell";
	arm,primecell-periphid = <0x00041080>;
	reg = <0x40002000 0x1000>;
	interrupts = <2>;
	clocks = <&ccu1 CLK_CPU_DMA>;
	clock-names = "apb_pclk";
	#dma-cells = <2>;
	dma-channels = <8>;
	dma-requests = <16>;
	lli-bus-interface-ahb1;
	lli-bus-interface-ahb2;
	mem-bus-interface-ahb1;
	mem-bus-interface-ahb2;
	memcpy-burst-size = <256>;
	memcpy-bus-width = <32>;
};

dmamux: dma-mux {
	compatible = "nxp,lpc1850-dmamux";
	#dma-cells = <3>;
	dma-requests = <64>;
	dma-masters = <&dmac>;
};

uart0: serial@40081000 {
	compatible = "nxp,lpc1850-uart", "ns16550a";
	reg = <0x40081000 0x1000>;
	reg-shift = <2>;
	interrupts = <24>;
	clocks = <&ccu2 CLK_APB0_UART0>, <&ccu1 CLK_CPU_UART0>;
	clock-names = "uartclk", "reg";
	dmas = <&dmamux 1 1 2
		&dmamux 2 1 2>;
	dma-names = "tx", "rx";
};
...@@ -12,10 +12,13 @@ XOR engine has. Those sub-nodes have the following required ...@@ -12,10 +12,13 @@ XOR engine has. Those sub-nodes have the following required
properties: properties:
- interrupts: interrupt of the XOR channel - interrupts: interrupt of the XOR channel
And the following optional properties: The sub-nodes used to contain one or several of the following
properties, but they are now deprecated:
- dmacap,memcpy to indicate that the XOR channel is capable of memcpy operations - dmacap,memcpy to indicate that the XOR channel is capable of memcpy operations
- dmacap,memset to indicate that the XOR channel is capable of memset operations - dmacap,memset to indicate that the XOR channel is capable of memset operations
- dmacap,xor to indicate that the XOR channel is capable of xor operations - dmacap,xor to indicate that the XOR channel is capable of xor operations
- dmacap,interrupt to indicate that the XOR channel is capable of
generating interrupts
Example: Example:
...@@ -28,13 +31,8 @@ xor@d0060900 { ...@@ -28,13 +31,8 @@ xor@d0060900 {
xor00 { xor00 {
interrupts = <51>; interrupts = <51>;
dmacap,memcpy;
dmacap,xor;
}; };
xor01 { xor01 {
interrupts = <52>; interrupts = <52>;
dmacap,memcpy;
dmacap,xor;
dmacap,memset;
}; };
}; };
Allwinner A10 DMA Controller
This driver follows the generic DMA bindings defined in dma.txt.
Required properties:
- compatible: Must be "allwinner,sun4i-a10-dma"
- reg: Should contain the registers base address and length
- interrupts: Should contain a reference to the interrupt used by this device
- clocks: Should contain a reference to the parent AHB clock
- #dma-cells : Should be 2, first cell denoting normal or dedicated dma,
second cell holding the request line number.
Example:
dma: dma-controller@01c02000 {
	compatible = "allwinner,sun4i-a10-dma";
	reg = <0x01c02000 0x1000>;
	interrupts = <27>;
	clocks = <&ahb_gates 6>;
	#dma-cells = <2>;
};
Clients:
DMA clients connected to the Allwinner A10 DMA controller must use the
format described in the dma.txt file, using a three-cell specifier for
each channel: a phandle plus two integer cells.
The three cells in order are:
1. A phandle pointing to the DMA controller.
2. Whether it is using normal (0) or dedicated (1) channels
3. The port ID as specified in the datasheet
Example:
spi2: spi@01c17000 {
	compatible = "allwinner,sun4i-a10-spi";
	reg = <0x01c17000 0x1000>;
	interrupts = <0 12 4>;
	clocks = <&ahb_gates 22>, <&spi2_clk>;
	clock-names = "ahb", "mod";
	dmas = <&dma 1 29>, <&dma 1 28>;
	dma-names = "rx", "tx";
	status = "disabled";
	#address-cells = <1>;
	#size-cells = <0>;
};
* ZTE ZX296702 DMA controller
Required properties:
- compatible: Should be "zte,zx296702-dma"
- reg: Should contain DMA registers location and length.
- interrupts: Should contain one interrupt shared by all channels
- #dma-cells: see dma.txt; should be 1, the cell holding the request line number
- dma-channels: physical channels supported
- dma-requests: virtual channels supported, each virtual channel
  has a specific request line
- clocks: clock required
Example:
Controller:
dma: dma-controller@0x09c00000 {
	compatible = "zte,zx296702-dma";
	reg = <0x09c00000 0x1000>;
	clocks = <&topclk ZX296702_DMA_ACLK>;
	interrupts = <GIC_SPI 66 IRQ_TYPE_LEVEL_HIGH>;
	#dma-cells = <1>;
	dma-channels = <24>;
	dma-requests = <24>;
};
Client:
Use the specific request line passed from the DMA controller node.
For example, the spdif0 tx channel request line is 4:
spdif0: spdif0@0b004000 {
	#sound-dai-cells = <0>;
	compatible = "zte,zx296702-spdif";
	reg = <0x0b004000 0x1000>;
	clocks = <&lsp0clk ZX296702_SPDIF0_DIV>;
	clock-names = "tx";
	interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>;
	dmas = <&dma 4>;
	dma-names = "tx";
};
...@@ -345,12 +345,29 @@ where to put them) ...@@ -345,12 +345,29 @@ where to put them)
that abstracts it away.

  * DMA_CTRL_ACK
    - If clear, the descriptor cannot be reused by the provider until the
      client acknowledges receipt, i.e. has had a chance to establish any
      dependency chains.
    - This can be acked by invoking async_tx_ack().
    - If set, it does not mean the descriptor can be reused.

  * DMA_CTRL_REUSE
    - If set, the descriptor can be reused after being completed. It should
      not be freed by the provider if this flag is set.
    - The descriptor should be prepared for reuse by invoking
      dmaengine_desc_set_reuse(), which will set DMA_CTRL_REUSE.
    - dmaengine_desc_set_reuse() will succeed only when the channel supports
      reusable descriptors, as exhibited by its capabilities.
    - As a consequence, if a device driver wants to skip the dma_map_sg() and
      dma_unmap_sg() in between 2 transfers, because the DMA'd data wasn't used,
      it can resubmit the transfer right after its completion.
    - A descriptor can be freed in a few ways:
      - Clearing DMA_CTRL_REUSE by invoking dmaengine_desc_clear_reuse()
        and submitting it for the last transaction.
      - Explicitly invoking dmaengine_desc_free(), which can succeed only
        when DMA_CTRL_REUSE is already set.
      - Terminating the channel.
    (A client-side usage sketch follows this list.)
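As a minimal illustration only (not taken from this patch set), a client using
a channel that advertises descriptor reuse could look roughly like the sketch
below. The function name, buffer and completion handling are hypothetical, and
error paths are trimmed; only the dmaengine calls named in this document are
assumed.

/*
 * Hypothetical client-side sketch: submit the same reusable slave
 * descriptor twice, then free it explicitly.
 */
#include <linux/dmaengine.h>

static int xfer_twice_with_reuse(struct dma_chan *chan,
				 dma_addr_t buf_dma, size_t buf_len)
{
	struct dma_async_tx_descriptor *desc;

	desc = dmaengine_prep_slave_single(chan, buf_dma, buf_len,
					   DMA_MEM_TO_DEV,
					   DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	/* Fails unless the channel advertises reusable descriptors */
	if (dmaengine_desc_set_reuse(desc))
		return -EPERM;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	/* ... wait for completion, e.g. via the descriptor callback ... */

	/* Resubmit the very same descriptor; no re-prepare or re-map needed */
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	/* ... wait for completion again ... */

	/* Explicit free is only allowed while DMA_CTRL_REUSE is still set */
	return dmaengine_desc_free(desc);
}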
General Design Notes
--------------------
......
...@@ -735,6 +735,12 @@ X: drivers/iio/*/adjd* ...@@ -735,6 +735,12 @@ X: drivers/iio/*/adjd*
F: drivers/staging/iio/*/ad* F: drivers/staging/iio/*/ad*
F: staging/iio/trigger/iio-trig-bfin-timer.c F: staging/iio/trigger/iio-trig-bfin-timer.c
ANALOG DEVICES INC DMA DRIVERS
M: Lars-Peter Clausen <lars@metafoo.de>
W: http://ez.analog.com/community/linux-device-drivers
S: Supported
F: drivers/dma/dma-axi-dmac.c
ANDROID DRIVERS ANDROID DRIVERS
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
M: Arve Hjønnevåg <arve@android.com> M: Arve Hjønnevåg <arve@android.com>
......
#dmaengine debug flags
subdir-ccflags-$(CONFIG_DMADEVICES_DEBUG) := -DDEBUG subdir-ccflags-$(CONFIG_DMADEVICES_DEBUG) := -DDEBUG
subdir-ccflags-$(CONFIG_DMADEVICES_VDEBUG) += -DVERBOSE_DEBUG subdir-ccflags-$(CONFIG_DMADEVICES_VDEBUG) += -DVERBOSE_DEBUG
#core
obj-$(CONFIG_DMA_ENGINE) += dmaengine.o obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
obj-$(CONFIG_DMA_VIRTUAL_CHANNELS) += virt-dma.o obj-$(CONFIG_DMA_VIRTUAL_CHANNELS) += virt-dma.o
obj-$(CONFIG_DMA_ACPI) += acpi-dma.o obj-$(CONFIG_DMA_ACPI) += acpi-dma.o
obj-$(CONFIG_DMA_OF) += of-dma.o obj-$(CONFIG_DMA_OF) += of-dma.o
#dmatest
obj-$(CONFIG_DMATEST) += dmatest.o obj-$(CONFIG_DMATEST) += dmatest.o
obj-$(CONFIG_INTEL_IOATDMA) += ioat/
obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o #devices
obj-$(CONFIG_FSL_DMA) += fsldma.o obj-$(CONFIG_AMBA_PL08X) += amba-pl08x.o
obj-$(CONFIG_HSU_DMA) += hsu/ obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
obj-$(CONFIG_MPC512X_DMA) += mpc512x_dma.o
obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
obj-$(CONFIG_MV_XOR) += mv_xor.o
obj-$(CONFIG_IDMA64) += idma64.o
obj-$(CONFIG_DW_DMAC_CORE) += dw/
obj-$(CONFIG_AT_HDMAC) += at_hdmac.o obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
obj-$(CONFIG_AT_XDMAC) += at_xdmac.o obj-$(CONFIG_AT_XDMAC) += at_xdmac.o
obj-$(CONFIG_MX3_IPU) += ipu/ obj-$(CONFIG_AXI_DMAC) += dma-axi-dmac.o
obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
obj-$(CONFIG_RENESAS_DMA) += sh/
obj-$(CONFIG_COH901318) += coh901318.o coh901318_lli.o obj-$(CONFIG_COH901318) += coh901318.o coh901318_lli.o
obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/ obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o
obj-$(CONFIG_IMX_SDMA) += imx-sdma.o obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
obj-$(CONFIG_DMA_JZ4780) += dma-jz4780.o
obj-$(CONFIG_DMA_OMAP) += omap-dma.o
obj-$(CONFIG_DMA_SA11X0) += sa11x0-dma.o
obj-$(CONFIG_DMA_SUN4I) += sun4i-dma.o
obj-$(CONFIG_DMA_SUN6I) += sun6i-dma.o
obj-$(CONFIG_DW_DMAC_CORE) += dw/
obj-$(CONFIG_EP93XX_DMA) += ep93xx_dma.o
obj-$(CONFIG_FSL_DMA) += fsldma.o
obj-$(CONFIG_FSL_EDMA) += fsl-edma.o
obj-$(CONFIG_FSL_RAID) += fsl_raid.o
obj-$(CONFIG_HSU_DMA) += hsu/
obj-$(CONFIG_IMG_MDC_DMA) += img-mdc-dma.o
obj-$(CONFIG_IMX_DMA) += imx-dma.o obj-$(CONFIG_IMX_DMA) += imx-dma.o
obj-$(CONFIG_IMX_SDMA) += imx-sdma.o
obj-$(CONFIG_IDMA64) += idma64.o
obj-$(CONFIG_INTEL_IOATDMA) += ioat/
obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
obj-$(CONFIG_INTEL_MIC_X100_DMA) += mic_x100_dma.o
obj-$(CONFIG_K3_DMA) += k3dma.o
obj-$(CONFIG_LPC18XX_DMAMUX) += lpc18xx-dmamux.o
obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o
obj-$(CONFIG_MMP_TDMA) += mmp_tdma.o
obj-$(CONFIG_MOXART_DMA) += moxart-dma.o
obj-$(CONFIG_MPC512X_DMA) += mpc512x_dma.o
obj-$(CONFIG_MV_XOR) += mv_xor.o
obj-$(CONFIG_MXS_DMA) += mxs-dma.o obj-$(CONFIG_MXS_DMA) += mxs-dma.o
obj-$(CONFIG_MX3_IPU) += ipu/
obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o
obj-$(CONFIG_PCH_DMA) += pch_dma.o
obj-$(CONFIG_PL330_DMA) += pl330.o
obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
obj-$(CONFIG_PXA_DMA) += pxa_dma.o obj-$(CONFIG_PXA_DMA) += pxa_dma.o
obj-$(CONFIG_TIMB_DMA) += timb_dma.o obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
obj-$(CONFIG_RENESAS_DMA) += sh/
obj-$(CONFIG_SIRF_DMA) += sirf-dma.o obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
obj-$(CONFIG_TI_EDMA) += edma.o
obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o
obj-$(CONFIG_S3C24XX_DMAC) += s3c24xx-dma.o obj-$(CONFIG_S3C24XX_DMAC) += s3c24xx-dma.o
obj-$(CONFIG_PL330_DMA) += pl330.o obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
obj-$(CONFIG_PCH_DMA) += pch_dma.o obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o
obj-$(CONFIG_AMBA_PL08X) += amba-pl08x.o obj-$(CONFIG_TIMB_DMA) += timb_dma.o
obj-$(CONFIG_EP93XX_DMA) += ep93xx_dma.o
obj-$(CONFIG_DMA_SA11X0) += sa11x0-dma.o
obj-$(CONFIG_MMP_TDMA) += mmp_tdma.o
obj-$(CONFIG_DMA_OMAP) += omap-dma.o
obj-$(CONFIG_TI_DMA_CROSSBAR) += ti-dma-crossbar.o
obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o
obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o
obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
obj-$(CONFIG_DMA_JZ4780) += dma-jz4780.o
obj-$(CONFIG_TI_CPPI41) += cppi41.o obj-$(CONFIG_TI_CPPI41) += cppi41.o
obj-$(CONFIG_K3_DMA) += k3dma.o obj-$(CONFIG_TI_DMA_CROSSBAR) += ti-dma-crossbar.o
obj-$(CONFIG_MOXART_DMA) += moxart-dma.o obj-$(CONFIG_TI_EDMA) += edma.o
obj-$(CONFIG_FSL_RAID) += fsl_raid.o
obj-$(CONFIG_FSL_EDMA) += fsl-edma.o
obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
obj-y += xilinx/
obj-$(CONFIG_INTEL_MIC_X100_DMA) += mic_x100_dma.o
obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o
obj-$(CONFIG_DMA_SUN6I) += sun6i-dma.o
obj-$(CONFIG_IMG_MDC_DMA) += img-mdc-dma.o
obj-$(CONFIG_XGENE_DMA) += xgene-dma.o obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
obj-$(CONFIG_ZX_DMA) += zx296702_dma.o
obj-y += xilinx/
...@@ -83,6 +83,8 @@ ...@@ -83,6 +83,8 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/of.h>
#include <linux/of_dma.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include <linux/slab.h> #include <linux/slab.h>
...@@ -2030,10 +2032,188 @@ static inline void init_pl08x_debugfs(struct pl08x_driver_data *pl08x) ...@@ -2030,10 +2032,188 @@ static inline void init_pl08x_debugfs(struct pl08x_driver_data *pl08x)
} }
#endif #endif
#ifdef CONFIG_OF
static struct dma_chan *pl08x_find_chan_id(struct pl08x_driver_data *pl08x,
u32 id)
{
struct pl08x_dma_chan *chan;
list_for_each_entry(chan, &pl08x->slave.channels, vc.chan.device_node) {
if (chan->signal == id)
return &chan->vc.chan;
}
return NULL;
}
static struct dma_chan *pl08x_of_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct pl08x_driver_data *pl08x = ofdma->of_dma_data;
struct pl08x_channel_data *data;
struct pl08x_dma_chan *chan;
struct dma_chan *dma_chan;
if (!pl08x)
return NULL;
if (dma_spec->args_count != 2)
return NULL;
dma_chan = pl08x_find_chan_id(pl08x, dma_spec->args[0]);
if (dma_chan)
return dma_get_slave_channel(dma_chan);
chan = devm_kzalloc(pl08x->slave.dev, sizeof(*chan) + sizeof(*data),
GFP_KERNEL);
if (!chan)
return NULL;
data = (void *)&chan[1];
data->bus_id = "(none)";
data->periph_buses = dma_spec->args[1];
chan->cd = data;
chan->host = pl08x;
chan->slave = true;
chan->name = data->bus_id;
chan->state = PL08X_CHAN_IDLE;
chan->signal = dma_spec->args[0];
chan->vc.desc_free = pl08x_desc_free;
vchan_init(&chan->vc, &pl08x->slave);
return dma_get_slave_channel(&chan->vc.chan);
}
static int pl08x_of_probe(struct amba_device *adev,
struct pl08x_driver_data *pl08x,
struct device_node *np)
{
struct pl08x_platform_data *pd;
u32 cctl_memcpy = 0;
u32 val;
int ret;
pd = devm_kzalloc(&adev->dev, sizeof(*pd), GFP_KERNEL);
if (!pd)
return -ENOMEM;
/* Eligible bus masters for fetching LLIs */
if (of_property_read_bool(np, "lli-bus-interface-ahb1"))
pd->lli_buses |= PL08X_AHB1;
if (of_property_read_bool(np, "lli-bus-interface-ahb2"))
pd->lli_buses |= PL08X_AHB2;
if (!pd->lli_buses) {
dev_info(&adev->dev, "no bus masters for LLIs stated, assume all\n");
pd->lli_buses |= PL08X_AHB1 | PL08X_AHB2;
}
/* Eligible bus masters for memory access */
if (of_property_read_bool(np, "mem-bus-interface-ahb1"))
pd->mem_buses |= PL08X_AHB1;
if (of_property_read_bool(np, "mem-bus-interface-ahb2"))
pd->mem_buses |= PL08X_AHB2;
if (!pd->mem_buses) {
dev_info(&adev->dev, "no bus masters for memory stated, assume all\n");
pd->mem_buses |= PL08X_AHB1 | PL08X_AHB2;
}
/* Parse the memcpy channel properties */
ret = of_property_read_u32(np, "memcpy-burst-size", &val);
if (ret) {
dev_info(&adev->dev, "no memcpy burst size specified, using 1 byte\n");
val = 1;
}
switch (val) {
default:
dev_err(&adev->dev, "illegal burst size for memcpy, set to 1\n");
/* Fall through */
case 1:
cctl_memcpy |= PL080_BSIZE_1 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_1 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 4:
cctl_memcpy |= PL080_BSIZE_4 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_4 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 8:
cctl_memcpy |= PL080_BSIZE_8 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_8 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 16:
cctl_memcpy |= PL080_BSIZE_16 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_16 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 32:
cctl_memcpy |= PL080_BSIZE_32 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_32 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 64:
cctl_memcpy |= PL080_BSIZE_64 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_64 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 128:
cctl_memcpy |= PL080_BSIZE_128 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_128 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
case 256:
cctl_memcpy |= PL080_BSIZE_256 << PL080_CONTROL_SB_SIZE_SHIFT |
PL080_BSIZE_256 << PL080_CONTROL_DB_SIZE_SHIFT;
break;
}
ret = of_property_read_u32(np, "memcpy-bus-width", &val);
if (ret) {
dev_info(&adev->dev, "no memcpy bus width specified, using 8 bits\n");
val = 8;
}
switch (val) {
default:
dev_err(&adev->dev, "illegal bus width for memcpy, set to 8 bits\n");
/* Fall through */
case 8:
cctl_memcpy |= PL080_WIDTH_8BIT << PL080_CONTROL_SWIDTH_SHIFT |
PL080_WIDTH_8BIT << PL080_CONTROL_DWIDTH_SHIFT;
break;
case 16:
cctl_memcpy |= PL080_WIDTH_16BIT << PL080_CONTROL_SWIDTH_SHIFT |
PL080_WIDTH_16BIT << PL080_CONTROL_DWIDTH_SHIFT;
break;
case 32:
cctl_memcpy |= PL080_WIDTH_32BIT << PL080_CONTROL_SWIDTH_SHIFT |
PL080_WIDTH_32BIT << PL080_CONTROL_DWIDTH_SHIFT;
break;
}
/* This is currently the only thing making sense */
cctl_memcpy |= PL080_CONTROL_PROT_SYS;
/* Set up memcpy channel */
pd->memcpy_channel.bus_id = "memcpy";
pd->memcpy_channel.cctl_memcpy = cctl_memcpy;
/* Use the buses that can access memory, obviously */
pd->memcpy_channel.periph_buses = pd->mem_buses;
pl08x->pd = pd;
return of_dma_controller_register(adev->dev.of_node, pl08x_of_xlate,
pl08x);
}
#else
static inline int pl08x_of_probe(struct amba_device *adev,
struct pl08x_driver_data *pl08x,
struct device_node *np)
{
return -EINVAL;
}
#endif
static int pl08x_probe(struct amba_device *adev, const struct amba_id *id) static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
{ {
struct pl08x_driver_data *pl08x; struct pl08x_driver_data *pl08x;
const struct vendor_data *vd = id->data; const struct vendor_data *vd = id->data;
struct device_node *np = adev->dev.of_node;
u32 tsfr_size; u32 tsfr_size;
int ret = 0; int ret = 0;
int i; int i;
...@@ -2093,10 +2273,16 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id) ...@@ -2093,10 +2273,16 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
/* Get the platform data */ /* Get the platform data */
pl08x->pd = dev_get_platdata(&adev->dev); pl08x->pd = dev_get_platdata(&adev->dev);
if (!pl08x->pd) { if (!pl08x->pd) {
if (np) {
ret = pl08x_of_probe(adev, pl08x, np);
if (ret)
goto out_no_platdata;
} else {
dev_err(&adev->dev, "no platform data supplied\n"); dev_err(&adev->dev, "no platform data supplied\n");
ret = -EINVAL; ret = -EINVAL;
goto out_no_platdata; goto out_no_platdata;
} }
}
/* Assign useful pointers to the driver state */ /* Assign useful pointers to the driver state */
pl08x->adev = adev; pl08x->adev = adev;
......
...@@ -448,6 +448,7 @@ static void ...@@ -448,6 +448,7 @@ static void
atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc) atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
{ {
struct dma_async_tx_descriptor *txd = &desc->txd; struct dma_async_tx_descriptor *txd = &desc->txd;
struct at_dma *atdma = to_at_dma(atchan->chan_common.device);
dev_vdbg(chan2dev(&atchan->chan_common), dev_vdbg(chan2dev(&atchan->chan_common),
"descriptor %u complete\n", txd->cookie); "descriptor %u complete\n", txd->cookie);
...@@ -456,6 +457,13 @@ atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc) ...@@ -456,6 +457,13 @@ atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
if (!atc_chan_is_cyclic(atchan)) if (!atc_chan_is_cyclic(atchan))
dma_cookie_complete(txd); dma_cookie_complete(txd);
/* If the transfer was a memset, free our temporary buffer */
if (desc->memset) {
dma_pool_free(atdma->memset_pool, desc->memset_vaddr,
desc->memset_paddr);
desc->memset = false;
}
/* move children to free_list */ /* move children to free_list */
list_splice_init(&desc->tx_list, &atchan->free_list); list_splice_init(&desc->tx_list, &atchan->free_list);
/* move myself to free_list */ /* move myself to free_list */
...@@ -717,14 +725,14 @@ atc_prep_dma_interleaved(struct dma_chan *chan, ...@@ -717,14 +725,14 @@ atc_prep_dma_interleaved(struct dma_chan *chan,
size_t len = 0; size_t len = 0;
int i; int i;
if (unlikely(!xt || xt->numf != 1 || !xt->frame_size))
return NULL;
dev_info(chan2dev(chan), dev_info(chan2dev(chan),
"%s: src=0x%08x, dest=0x%08x, numf=%d, frame_size=%d, flags=0x%lx\n", "%s: src=0x%08x, dest=0x%08x, numf=%d, frame_size=%d, flags=0x%lx\n",
__func__, xt->src_start, xt->dst_start, xt->numf, __func__, xt->src_start, xt->dst_start, xt->numf,
xt->frame_size, flags); xt->frame_size, flags);
if (unlikely(!xt || xt->numf != 1 || !xt->frame_size))
return NULL;
/* /*
* The controller can only "skip" X bytes every Y bytes, so we * The controller can only "skip" X bytes every Y bytes, so we
* need to make sure we are given a template that fit that * need to make sure we are given a template that fit that
...@@ -873,6 +881,93 @@ atc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, ...@@ -873,6 +881,93 @@ atc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
return NULL; return NULL;
} }
/**
 * atc_prep_dma_memset - prepare a memset operation
* @chan: the channel to prepare operation on
* @dest: operation virtual destination address
* @value: value to set memory buffer to
* @len: operation length
* @flags: tx descriptor status flags
*/
static struct dma_async_tx_descriptor *
atc_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
size_t len, unsigned long flags)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct at_dma *atdma = to_at_dma(chan->device);
struct at_desc *desc = NULL;
size_t xfer_count;
u32 ctrla;
u32 ctrlb;
dev_vdbg(chan2dev(chan), "%s: d0x%x v0x%x l0x%zx f0x%lx\n", __func__,
dest, value, len, flags);
if (unlikely(!len)) {
dev_dbg(chan2dev(chan), "%s: length is zero!\n", __func__);
return NULL;
}
if (!is_dma_fill_aligned(chan->device, dest, 0, len)) {
dev_dbg(chan2dev(chan), "%s: buffer is not aligned\n",
__func__);
return NULL;
}
xfer_count = len >> 2;
if (xfer_count > ATC_BTSIZE_MAX) {
dev_err(chan2dev(chan), "%s: buffer is too big\n",
__func__);
return NULL;
}
ctrlb = ATC_DEFAULT_CTRLB | ATC_IEN
| ATC_SRC_ADDR_MODE_FIXED
| ATC_DST_ADDR_MODE_INCR
| ATC_FC_MEM2MEM;
ctrla = ATC_SRC_WIDTH(2) |
ATC_DST_WIDTH(2);
desc = atc_desc_get(atchan);
if (!desc) {
dev_err(chan2dev(chan), "%s: can't get a descriptor\n",
__func__);
return NULL;
}
desc->memset_vaddr = dma_pool_alloc(atdma->memset_pool, GFP_ATOMIC,
&desc->memset_paddr);
if (!desc->memset_vaddr) {
dev_err(chan2dev(chan), "%s: couldn't allocate buffer\n",
__func__);
goto err_put_desc;
}
*desc->memset_vaddr = value;
desc->memset = true;
desc->lli.saddr = desc->memset_paddr;
desc->lli.daddr = dest;
desc->lli.ctrla = ctrla | xfer_count;
desc->lli.ctrlb = ctrlb;
desc->txd.cookie = -EBUSY;
desc->len = len;
desc->total_len = len;
/* set end-of-link on the descriptor */
set_desc_eol(desc);
desc->txd.flags = flags;
return &desc->txd;
err_put_desc:
atc_desc_put(atchan, desc);
return NULL;
}
/** /**
* atc_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction * atc_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
...@@ -1755,6 +1850,8 @@ static int __init at_dma_probe(struct platform_device *pdev) ...@@ -1755,6 +1850,8 @@ static int __init at_dma_probe(struct platform_device *pdev)
dma_cap_set(DMA_SG, at91sam9rl_config.cap_mask); dma_cap_set(DMA_SG, at91sam9rl_config.cap_mask);
dma_cap_set(DMA_INTERLEAVE, at91sam9g45_config.cap_mask); dma_cap_set(DMA_INTERLEAVE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_MEMCPY, at91sam9g45_config.cap_mask); dma_cap_set(DMA_MEMCPY, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_MEMSET, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_PRIVATE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_SLAVE, at91sam9g45_config.cap_mask); dma_cap_set(DMA_SLAVE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_SG, at91sam9g45_config.cap_mask); dma_cap_set(DMA_SG, at91sam9g45_config.cap_mask);
...@@ -1818,7 +1915,16 @@ static int __init at_dma_probe(struct platform_device *pdev) ...@@ -1818,7 +1915,16 @@ static int __init at_dma_probe(struct platform_device *pdev)
if (!atdma->dma_desc_pool) { if (!atdma->dma_desc_pool) {
dev_err(&pdev->dev, "No memory for descriptors dma pool\n"); dev_err(&pdev->dev, "No memory for descriptors dma pool\n");
err = -ENOMEM; err = -ENOMEM;
goto err_pool_create; goto err_desc_pool_create;
}
/* create a pool of consistent memory blocks for memset blocks */
atdma->memset_pool = dma_pool_create("at_hdmac_memset_pool",
&pdev->dev, sizeof(int), 4, 0);
if (!atdma->memset_pool) {
dev_err(&pdev->dev, "No memory for memset dma pool\n");
err = -ENOMEM;
goto err_memset_pool_create;
} }
/* clear any pending interrupt */ /* clear any pending interrupt */
...@@ -1864,6 +1970,11 @@ static int __init at_dma_probe(struct platform_device *pdev) ...@@ -1864,6 +1970,11 @@ static int __init at_dma_probe(struct platform_device *pdev)
if (dma_has_cap(DMA_MEMCPY, atdma->dma_common.cap_mask)) if (dma_has_cap(DMA_MEMCPY, atdma->dma_common.cap_mask))
atdma->dma_common.device_prep_dma_memcpy = atc_prep_dma_memcpy; atdma->dma_common.device_prep_dma_memcpy = atc_prep_dma_memcpy;
if (dma_has_cap(DMA_MEMSET, atdma->dma_common.cap_mask)) {
atdma->dma_common.device_prep_dma_memset = atc_prep_dma_memset;
atdma->dma_common.fill_align = DMAENGINE_ALIGN_4_BYTES;
}
if (dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask)) { if (dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask)) {
atdma->dma_common.device_prep_slave_sg = atc_prep_slave_sg; atdma->dma_common.device_prep_slave_sg = atc_prep_slave_sg;
/* controller can do slave DMA: can trigger cyclic transfers */ /* controller can do slave DMA: can trigger cyclic transfers */
...@@ -1884,8 +1995,9 @@ static int __init at_dma_probe(struct platform_device *pdev) ...@@ -1884,8 +1995,9 @@ static int __init at_dma_probe(struct platform_device *pdev)
dma_writel(atdma, EN, AT_DMA_ENABLE); dma_writel(atdma, EN, AT_DMA_ENABLE);
dev_info(&pdev->dev, "Atmel AHB DMA Controller ( %s%s%s), %d channels\n", dev_info(&pdev->dev, "Atmel AHB DMA Controller ( %s%s%s%s), %d channels\n",
dma_has_cap(DMA_MEMCPY, atdma->dma_common.cap_mask) ? "cpy " : "", dma_has_cap(DMA_MEMCPY, atdma->dma_common.cap_mask) ? "cpy " : "",
dma_has_cap(DMA_MEMSET, atdma->dma_common.cap_mask) ? "set " : "",
dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask) ? "slave " : "", dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask) ? "slave " : "",
dma_has_cap(DMA_SG, atdma->dma_common.cap_mask) ? "sg-cpy " : "", dma_has_cap(DMA_SG, atdma->dma_common.cap_mask) ? "sg-cpy " : "",
plat_dat->nr_channels); plat_dat->nr_channels);
...@@ -1910,8 +2022,10 @@ static int __init at_dma_probe(struct platform_device *pdev) ...@@ -1910,8 +2022,10 @@ static int __init at_dma_probe(struct platform_device *pdev)
err_of_dma_controller_register: err_of_dma_controller_register:
dma_async_device_unregister(&atdma->dma_common); dma_async_device_unregister(&atdma->dma_common);
dma_pool_destroy(atdma->memset_pool);
err_memset_pool_create:
dma_pool_destroy(atdma->dma_desc_pool); dma_pool_destroy(atdma->dma_desc_pool);
err_pool_create: err_desc_pool_create:
free_irq(platform_get_irq(pdev, 0), atdma); free_irq(platform_get_irq(pdev, 0), atdma);
err_irq: err_irq:
clk_disable_unprepare(atdma->clk); clk_disable_unprepare(atdma->clk);
...@@ -1936,6 +2050,7 @@ static int at_dma_remove(struct platform_device *pdev) ...@@ -1936,6 +2050,7 @@ static int at_dma_remove(struct platform_device *pdev)
at_dma_off(atdma); at_dma_off(atdma);
dma_async_device_unregister(&atdma->dma_common); dma_async_device_unregister(&atdma->dma_common);
dma_pool_destroy(atdma->memset_pool);
dma_pool_destroy(atdma->dma_desc_pool); dma_pool_destroy(atdma->dma_desc_pool);
free_irq(platform_get_irq(pdev, 0), atdma); free_irq(platform_get_irq(pdev, 0), atdma);
......
...@@ -200,6 +200,11 @@ struct at_desc { ...@@ -200,6 +200,11 @@ struct at_desc {
size_t boundary; size_t boundary;
size_t dst_hole; size_t dst_hole;
size_t src_hole; size_t src_hole;
/* Memset temporary buffer */
bool memset;
dma_addr_t memset_paddr;
int *memset_vaddr;
}; };
static inline struct at_desc * static inline struct at_desc *
...@@ -330,6 +335,7 @@ struct at_dma { ...@@ -330,6 +335,7 @@ struct at_dma {
u8 all_chan_mask; u8 all_chan_mask;
struct dma_pool *dma_desc_pool; struct dma_pool *dma_desc_pool;
struct dma_pool *memset_pool;
/* AT THE END channels table */ /* AT THE END channels table */
struct at_dma_chan chan[0]; struct at_dma_chan chan[0];
}; };
......
...@@ -797,10 +797,7 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr, ...@@ -797,10 +797,7 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
list_add_tail(&desc->desc_node, &first->descs_list); list_add_tail(&desc->desc_node, &first->descs_list);
} }
prev->lld.mbr_nda = first->tx_dma_desc.phys; at_xdmac_queue_desc(chan, prev, first);
dev_dbg(chan2dev(chan),
"%s: chain lld: prev=0x%p, mbr_nda=%pad\n",
__func__, prev, &prev->lld.mbr_nda);
first->tx_dma_desc.flags = flags; first->tx_dma_desc.flags = flags;
first->xfer_size = buf_len; first->xfer_size = buf_len;
first->direction = direction; first->direction = direction;
...@@ -1135,7 +1132,7 @@ static struct at_xdmac_desc *at_xdmac_memset_create_desc(struct dma_chan *chan, ...@@ -1135,7 +1132,7 @@ static struct at_xdmac_desc *at_xdmac_memset_create_desc(struct dma_chan *chan,
* SAMA5D4x), so we can use the same interface for source and dest, * SAMA5D4x), so we can use the same interface for source and dest,
* that solves the fact we don't know the direction. * that solves the fact we don't know the direction.
*/ */
u32 chan_cc = AT_XDMAC_CC_DAM_INCREMENTED_AM u32 chan_cc = AT_XDMAC_CC_DAM_UBS_AM
| AT_XDMAC_CC_SAM_INCREMENTED_AM | AT_XDMAC_CC_SAM_INCREMENTED_AM
| AT_XDMAC_CC_DIF(0) | AT_XDMAC_CC_DIF(0)
| AT_XDMAC_CC_SIF(0) | AT_XDMAC_CC_SIF(0)
...@@ -1203,6 +1200,168 @@ at_xdmac_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value, ...@@ -1203,6 +1200,168 @@ at_xdmac_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
return &desc->tx_dma_desc; return &desc->tx_dma_desc;
} }
static struct dma_async_tx_descriptor *
at_xdmac_prep_dma_memset_sg(struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, int value,
unsigned long flags)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac_desc *desc, *pdesc = NULL,
*ppdesc = NULL, *first = NULL;
struct scatterlist *sg, *psg = NULL, *ppsg = NULL;
size_t stride = 0, pstride = 0, len = 0;
int i;
if (!sgl)
return NULL;
dev_dbg(chan2dev(chan), "%s: sg_len=%d, value=0x%x, flags=0x%lx\n",
__func__, sg_len, value, flags);
/* Prepare descriptors. */
for_each_sg(sgl, sg, sg_len, i) {
dev_dbg(chan2dev(chan), "%s: dest=0x%08x, len=%d, pattern=0x%x, flags=0x%lx\n",
__func__, sg_dma_address(sg), sg_dma_len(sg),
value, flags);
desc = at_xdmac_memset_create_desc(chan, atchan,
sg_dma_address(sg),
sg_dma_len(sg),
value);
if (!desc && first)
list_splice_init(&first->descs_list,
&atchan->free_descs_list);
if (!first)
first = desc;
/* Update our strides */
pstride = stride;
if (psg)
stride = sg_dma_address(sg) -
(sg_dma_address(psg) + sg_dma_len(psg));
/*
* The scatterlist API gives us only the address and
* length of each elements.
*
* Unfortunately, we don't have the stride, which we
* will need to compute.
*
 * That makes us end up in a situation like this one:
* len stride len stride len
* +-------+ +-------+ +-------+
* | N-2 | | N-1 | | N |
* +-------+ +-------+ +-------+
*
* We need all these three elements (N-2, N-1 and N)
* to actually take the decision on whether we need to
* queue N-1 or reuse N-2.
*
* We will only consider N if it is the last element.
*/
if (ppdesc && pdesc) {
if ((stride == pstride) &&
(sg_dma_len(ppsg) == sg_dma_len(psg))) {
dev_dbg(chan2dev(chan),
"%s: desc 0x%p can be merged with desc 0x%p\n",
__func__, pdesc, ppdesc);
/*
* Increment the block count of the
* N-2 descriptor
*/
at_xdmac_increment_block_count(chan, ppdesc);
ppdesc->lld.mbr_dus = stride;
/*
* Put back the N-1 descriptor in the
* free descriptor list
*/
list_add_tail(&pdesc->desc_node,
&atchan->free_descs_list);
/*
* Make our N-1 descriptor pointer
* point to the N-2 since they were
* actually merged.
*/
pdesc = ppdesc;
/*
* Rule out the case where we don't have
* pstride computed yet (our second sg
* element)
*
* We also want to catch the case where there
* would be a negative stride,
*/
} else if (pstride ||
sg_dma_address(sg) < sg_dma_address(psg)) {
/*
* Queue the N-1 descriptor after the
* N-2
*/
at_xdmac_queue_desc(chan, ppdesc, pdesc);
/*
* Add the N-1 descriptor to the list
* of the descriptors used for this
* transfer
*/
list_add_tail(&desc->desc_node,
&first->descs_list);
dev_dbg(chan2dev(chan),
"%s: add desc 0x%p to descs_list 0x%p\n",
__func__, desc, first);
}
}
/*
* If we are the last element, just see if we have the
 * same size as the previous element.
*
* If so, we can merge it with the previous descriptor
* since we don't care about the stride anymore.
*/
if ((i == (sg_len - 1)) &&
sg_dma_len(ppsg) == sg_dma_len(psg)) {
dev_dbg(chan2dev(chan),
"%s: desc 0x%p can be merged with desc 0x%p\n",
__func__, desc, pdesc);
/*
* Increment the block count of the N-1
* descriptor
*/
at_xdmac_increment_block_count(chan, pdesc);
pdesc->lld.mbr_dus = stride;
/*
* Put back the N descriptor in the free
* descriptor list
*/
list_add_tail(&desc->desc_node,
&atchan->free_descs_list);
}
/* Update our descriptors */
ppdesc = pdesc;
pdesc = desc;
/* Update our scatter pointers */
ppsg = psg;
psg = sg;
len += sg_dma_len(sg);
}
first->tx_dma_desc.cookie = -EBUSY;
first->tx_dma_desc.flags = flags;
first->xfer_size = len;
return &first->tx_dma_desc;
}
static enum dma_status static enum dma_status
at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie, at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
struct dma_tx_state *txstate) struct dma_tx_state *txstate)
...@@ -1736,6 +1895,7 @@ static int at_xdmac_probe(struct platform_device *pdev) ...@@ -1736,6 +1895,7 @@ static int at_xdmac_probe(struct platform_device *pdev)
dma_cap_set(DMA_INTERLEAVE, atxdmac->dma.cap_mask); dma_cap_set(DMA_INTERLEAVE, atxdmac->dma.cap_mask);
dma_cap_set(DMA_MEMCPY, atxdmac->dma.cap_mask); dma_cap_set(DMA_MEMCPY, atxdmac->dma.cap_mask);
dma_cap_set(DMA_MEMSET, atxdmac->dma.cap_mask); dma_cap_set(DMA_MEMSET, atxdmac->dma.cap_mask);
dma_cap_set(DMA_MEMSET_SG, atxdmac->dma.cap_mask);
dma_cap_set(DMA_SLAVE, atxdmac->dma.cap_mask); dma_cap_set(DMA_SLAVE, atxdmac->dma.cap_mask);
/* /*
* Without DMA_PRIVATE the driver is not able to allocate more than * Without DMA_PRIVATE the driver is not able to allocate more than
...@@ -1751,6 +1911,7 @@ static int at_xdmac_probe(struct platform_device *pdev) ...@@ -1751,6 +1911,7 @@ static int at_xdmac_probe(struct platform_device *pdev)
atxdmac->dma.device_prep_interleaved_dma = at_xdmac_prep_interleaved; atxdmac->dma.device_prep_interleaved_dma = at_xdmac_prep_interleaved;
atxdmac->dma.device_prep_dma_memcpy = at_xdmac_prep_dma_memcpy; atxdmac->dma.device_prep_dma_memcpy = at_xdmac_prep_dma_memcpy;
atxdmac->dma.device_prep_dma_memset = at_xdmac_prep_dma_memset; atxdmac->dma.device_prep_dma_memset = at_xdmac_prep_dma_memset;
atxdmac->dma.device_prep_dma_memset_sg = at_xdmac_prep_dma_memset_sg;
atxdmac->dma.device_prep_slave_sg = at_xdmac_prep_slave_sg; atxdmac->dma.device_prep_slave_sg = at_xdmac_prep_slave_sg;
atxdmac->dma.device_config = at_xdmac_device_config; atxdmac->dma.device_config = at_xdmac_device_config;
atxdmac->dma.device_pause = at_xdmac_device_pause; atxdmac->dma.device_pause = at_xdmac_device_pause;
......
...@@ -2730,7 +2730,7 @@ static int __init coh901318_probe(struct platform_device *pdev) ...@@ -2730,7 +2730,7 @@ static int __init coh901318_probe(struct platform_device *pdev)
* This controller can only access address at even 32bit boundaries, * This controller can only access address at even 32bit boundaries,
* i.e. 2^2 * i.e. 2^2
*/ */
base->dma_memcpy.copy_align = 2; base->dma_memcpy.copy_align = DMAENGINE_ALIGN_4_BYTES;
err = dma_async_device_register(&base->dma_memcpy); err = dma_async_device_register(&base->dma_memcpy);
if (err) if (err)
......
...@@ -145,7 +145,8 @@ struct jz4780_dma_dev { ...@@ -145,7 +145,8 @@ struct jz4780_dma_dev {
struct jz4780_dma_chan chan[JZ_DMA_NR_CHANNELS]; struct jz4780_dma_chan chan[JZ_DMA_NR_CHANNELS];
}; };
struct jz4780_dma_data { struct jz4780_dma_filter_data {
struct device_node *of_node;
uint32_t transfer_type; uint32_t transfer_type;
int channel; int channel;
}; };
...@@ -214,11 +215,25 @@ static void jz4780_dma_desc_free(struct virt_dma_desc *vdesc) ...@@ -214,11 +215,25 @@ static void jz4780_dma_desc_free(struct virt_dma_desc *vdesc)
kfree(desc); kfree(desc);
} }
static uint32_t jz4780_dma_transfer_size(unsigned long val, int *ord) static uint32_t jz4780_dma_transfer_size(unsigned long val, uint32_t *shift)
{ {
*ord = ffs(val) - 1; int ord = ffs(val) - 1;
switch (*ord) { /*
* 8 byte transfer sizes unsupported so fall back on 4. If it's larger
* than the maximum, just limit it. It is perfectly safe to fall back
* in this way since we won't exceed the maximum burst size supported
* by the device, the only effect is reduced efficiency. This is better
* than refusing to perform the request at all.
*/
if (ord == 3)
ord = 2;
else if (ord > 7)
ord = 7;
*shift = ord;
switch (ord) {
case 0: case 0:
return JZ_DMA_SIZE_1_BYTE; return JZ_DMA_SIZE_1_BYTE;
case 1: case 1:
...@@ -231,20 +246,17 @@ static uint32_t jz4780_dma_transfer_size(unsigned long val, int *ord) ...@@ -231,20 +246,17 @@ static uint32_t jz4780_dma_transfer_size(unsigned long val, int *ord)
return JZ_DMA_SIZE_32_BYTE; return JZ_DMA_SIZE_32_BYTE;
case 6: case 6:
return JZ_DMA_SIZE_64_BYTE; return JZ_DMA_SIZE_64_BYTE;
case 7:
return JZ_DMA_SIZE_128_BYTE;
default: default:
return -EINVAL; return JZ_DMA_SIZE_128_BYTE;
} }
} }
static uint32_t jz4780_dma_setup_hwdesc(struct jz4780_dma_chan *jzchan, static int jz4780_dma_setup_hwdesc(struct jz4780_dma_chan *jzchan,
struct jz4780_dma_hwdesc *desc, dma_addr_t addr, size_t len, struct jz4780_dma_hwdesc *desc, dma_addr_t addr, size_t len,
enum dma_transfer_direction direction) enum dma_transfer_direction direction)
{ {
struct dma_slave_config *config = &jzchan->config; struct dma_slave_config *config = &jzchan->config;
uint32_t width, maxburst, tsz; uint32_t width, maxburst, tsz;
int ord;
if (direction == DMA_MEM_TO_DEV) { if (direction == DMA_MEM_TO_DEV) {
desc->dcm = JZ_DMA_DCM_SAI; desc->dcm = JZ_DMA_DCM_SAI;
...@@ -271,8 +283,8 @@ static uint32_t jz4780_dma_setup_hwdesc(struct jz4780_dma_chan *jzchan, ...@@ -271,8 +283,8 @@ static uint32_t jz4780_dma_setup_hwdesc(struct jz4780_dma_chan *jzchan,
* divisible by the transfer size, and we must not use more than the * divisible by the transfer size, and we must not use more than the
* maximum burst specified by the user. * maximum burst specified by the user.
*/ */
tsz = jz4780_dma_transfer_size(addr | len | (width * maxburst), &ord); tsz = jz4780_dma_transfer_size(addr | len | (width * maxburst),
jzchan->transfer_shift = ord; &jzchan->transfer_shift);
switch (width) { switch (width) {
case DMA_SLAVE_BUSWIDTH_1_BYTE: case DMA_SLAVE_BUSWIDTH_1_BYTE:
...@@ -289,12 +301,14 @@ static uint32_t jz4780_dma_setup_hwdesc(struct jz4780_dma_chan *jzchan, ...@@ -289,12 +301,14 @@ static uint32_t jz4780_dma_setup_hwdesc(struct jz4780_dma_chan *jzchan,
desc->dcm |= width << JZ_DMA_DCM_SP_SHIFT; desc->dcm |= width << JZ_DMA_DCM_SP_SHIFT;
desc->dcm |= width << JZ_DMA_DCM_DP_SHIFT; desc->dcm |= width << JZ_DMA_DCM_DP_SHIFT;
desc->dtc = len >> ord; desc->dtc = len >> jzchan->transfer_shift;
return 0;
} }
static struct dma_async_tx_descriptor *jz4780_dma_prep_slave_sg( static struct dma_async_tx_descriptor *jz4780_dma_prep_slave_sg(
struct dma_chan *chan, struct scatterlist *sgl, unsigned int sg_len, struct dma_chan *chan, struct scatterlist *sgl, unsigned int sg_len,
enum dma_transfer_direction direction, unsigned long flags) enum dma_transfer_direction direction, unsigned long flags,
void *context)
{ {
struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan); struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan);
struct jz4780_dma_desc *desc; struct jz4780_dma_desc *desc;
...@@ -311,8 +325,7 @@ static struct dma_async_tx_descriptor *jz4780_dma_prep_slave_sg( ...@@ -311,8 +325,7 @@ static struct dma_async_tx_descriptor *jz4780_dma_prep_slave_sg(
sg_dma_len(&sgl[i]), sg_dma_len(&sgl[i]),
direction); direction);
if (err < 0) if (err < 0)
return ERR_PTR(err); return NULL;
desc->desc[i].dcm |= JZ_DMA_DCM_TIE; desc->desc[i].dcm |= JZ_DMA_DCM_TIE;
...@@ -356,7 +369,7 @@ static struct dma_async_tx_descriptor *jz4780_dma_prep_dma_cyclic( ...@@ -356,7 +369,7 @@ static struct dma_async_tx_descriptor *jz4780_dma_prep_dma_cyclic(
err = jz4780_dma_setup_hwdesc(jzchan, &desc->desc[i], buf_addr, err = jz4780_dma_setup_hwdesc(jzchan, &desc->desc[i], buf_addr,
period_len, direction); period_len, direction);
if (err < 0) if (err < 0)
return ERR_PTR(err); return NULL;
buf_addr += period_len; buf_addr += period_len;
...@@ -390,15 +403,13 @@ struct dma_async_tx_descriptor *jz4780_dma_prep_dma_memcpy( ...@@ -390,15 +403,13 @@ struct dma_async_tx_descriptor *jz4780_dma_prep_dma_memcpy(
struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan); struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan);
struct jz4780_dma_desc *desc; struct jz4780_dma_desc *desc;
uint32_t tsz; uint32_t tsz;
int ord;
desc = jz4780_dma_desc_alloc(jzchan, 1, DMA_MEMCPY); desc = jz4780_dma_desc_alloc(jzchan, 1, DMA_MEMCPY);
if (!desc) if (!desc)
return NULL; return NULL;
tsz = jz4780_dma_transfer_size(dest | src | len, &ord); tsz = jz4780_dma_transfer_size(dest | src | len,
if (tsz < 0) &jzchan->transfer_shift);
return ERR_PTR(tsz);
desc->desc[0].dsa = src; desc->desc[0].dsa = src;
desc->desc[0].dta = dest; desc->desc[0].dta = dest;
...@@ -407,7 +418,7 @@ struct dma_async_tx_descriptor *jz4780_dma_prep_dma_memcpy( ...@@ -407,7 +418,7 @@ struct dma_async_tx_descriptor *jz4780_dma_prep_dma_memcpy(
tsz << JZ_DMA_DCM_TSZ_SHIFT | tsz << JZ_DMA_DCM_TSZ_SHIFT |
JZ_DMA_WIDTH_32_BIT << JZ_DMA_DCM_SP_SHIFT | JZ_DMA_WIDTH_32_BIT << JZ_DMA_DCM_SP_SHIFT |
JZ_DMA_WIDTH_32_BIT << JZ_DMA_DCM_DP_SHIFT; JZ_DMA_WIDTH_32_BIT << JZ_DMA_DCM_DP_SHIFT;
desc->desc[0].dtc = len >> ord; desc->desc[0].dtc = len >> jzchan->transfer_shift;
return vchan_tx_prep(&jzchan->vchan, &desc->vdesc, flags); return vchan_tx_prep(&jzchan->vchan, &desc->vdesc, flags);
} }
...@@ -484,8 +495,9 @@ static void jz4780_dma_issue_pending(struct dma_chan *chan) ...@@ -484,8 +495,9 @@ static void jz4780_dma_issue_pending(struct dma_chan *chan)
spin_unlock_irqrestore(&jzchan->vchan.lock, flags); spin_unlock_irqrestore(&jzchan->vchan.lock, flags);
} }
static int jz4780_dma_terminate_all(struct jz4780_dma_chan *jzchan) static int jz4780_dma_terminate_all(struct dma_chan *chan)
{ {
struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan);
struct jz4780_dma_dev *jzdma = jz4780_dma_chan_parent(jzchan); struct jz4780_dma_dev *jzdma = jz4780_dma_chan_parent(jzchan);
unsigned long flags; unsigned long flags;
LIST_HEAD(head); LIST_HEAD(head);
...@@ -507,9 +519,11 @@ static int jz4780_dma_terminate_all(struct jz4780_dma_chan *jzchan) ...@@ -507,9 +519,11 @@ static int jz4780_dma_terminate_all(struct jz4780_dma_chan *jzchan)
return 0; return 0;
} }
static int jz4780_dma_slave_config(struct jz4780_dma_chan *jzchan, static int jz4780_dma_config(struct dma_chan *chan,
const struct dma_slave_config *config) struct dma_slave_config *config)
{ {
struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan);
if ((config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES) if ((config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
|| (config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)) || (config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES))
return -EINVAL; return -EINVAL;
...@@ -671,7 +685,10 @@ static bool jz4780_dma_filter_fn(struct dma_chan *chan, void *param) ...@@ -671,7 +685,10 @@ static bool jz4780_dma_filter_fn(struct dma_chan *chan, void *param)
{ {
struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan); struct jz4780_dma_chan *jzchan = to_jz4780_dma_chan(chan);
struct jz4780_dma_dev *jzdma = jz4780_dma_chan_parent(jzchan); struct jz4780_dma_dev *jzdma = jz4780_dma_chan_parent(jzchan);
struct jz4780_dma_data *data = param; struct jz4780_dma_filter_data *data = param;
if (jzdma->dma_device.dev->of_node != data->of_node)
return false;
if (data->channel > -1) { if (data->channel > -1) {
if (data->channel != jzchan->id) if (data->channel != jzchan->id)
...@@ -690,11 +707,12 @@ static struct dma_chan *jz4780_of_dma_xlate(struct of_phandle_args *dma_spec, ...@@ -690,11 +707,12 @@ static struct dma_chan *jz4780_of_dma_xlate(struct of_phandle_args *dma_spec,
{ {
struct jz4780_dma_dev *jzdma = ofdma->of_dma_data; struct jz4780_dma_dev *jzdma = ofdma->of_dma_data;
dma_cap_mask_t mask = jzdma->dma_device.cap_mask; dma_cap_mask_t mask = jzdma->dma_device.cap_mask;
struct jz4780_dma_data data; struct jz4780_dma_filter_data data;
if (dma_spec->args_count != 2) if (dma_spec->args_count != 2)
return NULL; return NULL;
data.of_node = ofdma->of_node;
data.transfer_type = dma_spec->args[0]; data.transfer_type = dma_spec->args[0];
data.channel = dma_spec->args[1]; data.channel = dma_spec->args[1];
...@@ -713,9 +731,14 @@ static struct dma_chan *jz4780_of_dma_xlate(struct of_phandle_args *dma_spec, ...@@ -713,9 +731,14 @@ static struct dma_chan *jz4780_of_dma_xlate(struct of_phandle_args *dma_spec,
data.channel); data.channel);
return NULL; return NULL;
} }
}
jzdma->chan[data.channel].transfer_type = data.transfer_type;
return dma_get_slave_channel(
&jzdma->chan[data.channel].vchan.chan);
} else {
return dma_request_channel(mask, jz4780_dma_filter_fn, &data); return dma_request_channel(mask, jz4780_dma_filter_fn, &data);
}
} }
static int jz4780_dma_probe(struct platform_device *pdev) static int jz4780_dma_probe(struct platform_device *pdev)
...@@ -743,23 +766,26 @@ static int jz4780_dma_probe(struct platform_device *pdev) ...@@ -743,23 +766,26 @@ static int jz4780_dma_probe(struct platform_device *pdev)
if (IS_ERR(jzdma->base)) if (IS_ERR(jzdma->base))
return PTR_ERR(jzdma->base); return PTR_ERR(jzdma->base);
jzdma->irq = platform_get_irq(pdev, 0); ret = platform_get_irq(pdev, 0);
if (jzdma->irq < 0) { if (ret < 0) {
dev_err(dev, "failed to get IRQ: %d\n", ret); dev_err(dev, "failed to get IRQ: %d\n", ret);
return jzdma->irq; return ret;
} }
ret = devm_request_irq(dev, jzdma->irq, jz4780_dma_irq_handler, 0, jzdma->irq = ret;
dev_name(dev), jzdma);
ret = request_irq(jzdma->irq, jz4780_dma_irq_handler, 0, dev_name(dev),
jzdma);
if (ret) { if (ret) {
dev_err(dev, "failed to request IRQ %u!\n", jzdma->irq); dev_err(dev, "failed to request IRQ %u!\n", jzdma->irq);
return -EINVAL; return ret;
} }
jzdma->clk = devm_clk_get(dev, NULL); jzdma->clk = devm_clk_get(dev, NULL);
if (IS_ERR(jzdma->clk)) { if (IS_ERR(jzdma->clk)) {
dev_err(dev, "failed to get clock\n"); dev_err(dev, "failed to get clock\n");
return PTR_ERR(jzdma->clk); ret = PTR_ERR(jzdma->clk);
goto err_free_irq;
} }
clk_prepare_enable(jzdma->clk); clk_prepare_enable(jzdma->clk);
...@@ -775,13 +801,13 @@ static int jz4780_dma_probe(struct platform_device *pdev) ...@@ -775,13 +801,13 @@ static int jz4780_dma_probe(struct platform_device *pdev)
dma_cap_set(DMA_CYCLIC, dd->cap_mask); dma_cap_set(DMA_CYCLIC, dd->cap_mask);
dd->dev = dev; dd->dev = dev;
dd->copy_align = 2; /* 2^2 = 4 byte alignment */ dd->copy_align = DMAENGINE_ALIGN_4_BYTES;
dd->device_alloc_chan_resources = jz4780_dma_alloc_chan_resources; dd->device_alloc_chan_resources = jz4780_dma_alloc_chan_resources;
dd->device_free_chan_resources = jz4780_dma_free_chan_resources; dd->device_free_chan_resources = jz4780_dma_free_chan_resources;
dd->device_prep_slave_sg = jz4780_dma_prep_slave_sg; dd->device_prep_slave_sg = jz4780_dma_prep_slave_sg;
dd->device_prep_dma_cyclic = jz4780_dma_prep_dma_cyclic; dd->device_prep_dma_cyclic = jz4780_dma_prep_dma_cyclic;
dd->device_prep_dma_memcpy = jz4780_dma_prep_dma_memcpy; dd->device_prep_dma_memcpy = jz4780_dma_prep_dma_memcpy;
dd->device_config = jz4780_dma_slave_config; dd->device_config = jz4780_dma_config;
dd->device_terminate_all = jz4780_dma_terminate_all; dd->device_terminate_all = jz4780_dma_terminate_all;
dd->device_tx_status = jz4780_dma_tx_status; dd->device_tx_status = jz4780_dma_tx_status;
dd->device_issue_pending = jz4780_dma_issue_pending; dd->device_issue_pending = jz4780_dma_issue_pending;
...@@ -790,7 +816,6 @@ static int jz4780_dma_probe(struct platform_device *pdev) ...@@ -790,7 +816,6 @@ static int jz4780_dma_probe(struct platform_device *pdev)
dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
/* /*
* Enable DMA controller, mark all channels as not programmable. * Enable DMA controller, mark all channels as not programmable.
* Also set the FMSC bit - it increases MSC performance, so it makes * Also set the FMSC bit - it increases MSC performance, so it makes
...@@ -832,15 +857,24 @@ static int jz4780_dma_probe(struct platform_device *pdev) ...@@ -832,15 +857,24 @@ static int jz4780_dma_probe(struct platform_device *pdev)
err_disable_clk: err_disable_clk:
clk_disable_unprepare(jzdma->clk); clk_disable_unprepare(jzdma->clk);
err_free_irq:
free_irq(jzdma->irq, jzdma);
return ret; return ret;
} }
static int jz4780_dma_remove(struct platform_device *pdev) static int jz4780_dma_remove(struct platform_device *pdev)
{ {
struct jz4780_dma_dev *jzdma = platform_get_drvdata(pdev); struct jz4780_dma_dev *jzdma = platform_get_drvdata(pdev);
int i;
of_dma_controller_free(pdev->dev.of_node); of_dma_controller_free(pdev->dev.of_node);
devm_free_irq(&pdev->dev, jzdma->irq, jzdma);
free_irq(jzdma->irq, jzdma);
for (i = 0; i < JZ_DMA_NR_CHANNELS; i++)
tasklet_kill(&jzdma->chan[i].vchan.task);
dma_async_device_unregister(&jzdma->dma_device); dma_async_device_unregister(&jzdma->dma_device);
return 0; return 0;
} }
......
@@ -6,6 +6,9 @@ config DW_DMAC_CORE
 	tristate
 	select DMA_ENGINE

+config DW_DMAC_BIG_ENDIAN_IO
+	bool
+
 config DW_DMAC
 	tristate "Synopsys DesignWare AHB DMA platform driver"
 	select DW_DMAC_CORE
@@ -23,6 +26,3 @@ config DW_DMAC_PCI
 	  Support the Synopsys DesignWare AHB DMA controller on the
 	  platfroms that enumerate it as a PCI device. For example,
 	  Intel Medfield has integrated this GPDMA controller.
-
-config DW_DMAC_BIG_ENDIAN_IO
-	bool
@@ -1000,7 +1000,7 @@ static void edma_dma_init(struct edma_cc *ecc, struct dma_device *dma,
 	 * code using dma memcpy must make sure alignment of
 	 * length is at dma->copy_align boundary.
 	 */
-	dma->copy_align = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	dma->copy_align = DMAENGINE_ALIGN_4_BYTES;

 	INIT_LIST_HEAD(&dma->channels);
 }
...
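Several drivers in this series (edma above, and imx-dma, k3-dma, mmp_pdma, mmp_tdma and jz4780 elsewhere in the diff) stop open-coding copy_align as a raw shift or a buswidth constant and use the new dmaengine alignment enum instead. copy_align is a log2 value, so DMAENGINE_ALIGN_4_BYTES still means 2^2 = 4-byte alignment. A sketch, assuming the enum has the usual power-of-two encoding (the values and the helper below are an assumption for illustration, not quoted from the header):

	/* Assumed encoding: each constant is the log2 of the required alignment. */
	enum dmaengine_alignment {
		DMAENGINE_ALIGN_1_BYTE = 0,
		DMAENGINE_ALIGN_2_BYTES = 1,
		DMAENGINE_ALIGN_4_BYTES = 2,
		DMAENGINE_ALIGN_8_BYTES = 3,
		DMAENGINE_ALIGN_16_BYTES = 4,
		DMAENGINE_ALIGN_32_BYTES = 5,
		DMAENGINE_ALIGN_64_BYTES = 6,
	};

	/* Hypothetical helper: a memcpy client checks a length against copy_align. */
	static inline bool example_len_is_aligned(unsigned int copy_align, size_t len)
	{
		return (len & ((1u << copy_align) - 1)) == 0;
	}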
@@ -99,21 +99,13 @@ static void hsu_dma_chan_start(struct hsu_dma_chan *hsuc)

 static void hsu_dma_stop_channel(struct hsu_dma_chan *hsuc)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(&hsuc->lock, flags);
 	hsu_chan_disable(hsuc);
 	hsu_chan_writel(hsuc, HSU_CH_DCR, 0);
-	spin_unlock_irqrestore(&hsuc->lock, flags);
 }

 static void hsu_dma_start_channel(struct hsu_dma_chan *hsuc)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(&hsuc->lock, flags);
 	hsu_dma_chan_start(hsuc);
-	spin_unlock_irqrestore(&hsuc->lock, flags);
 }

 static void hsu_dma_start_transfer(struct hsu_dma_chan *hsuc)
@@ -139,9 +131,9 @@ static u32 hsu_dma_chan_get_sr(struct hsu_dma_chan *hsuc)
 	unsigned long flags;
 	u32 sr;

-	spin_lock_irqsave(&hsuc->lock, flags);
+	spin_lock_irqsave(&hsuc->vchan.lock, flags);
 	sr = hsu_chan_readl(hsuc, HSU_CH_SR);
-	spin_unlock_irqrestore(&hsuc->lock, flags);
+	spin_unlock_irqrestore(&hsuc->vchan.lock, flags);

 	return sr;
 }
@@ -273,14 +265,11 @@ static size_t hsu_dma_active_desc_size(struct hsu_dma_chan *hsuc)
 	struct hsu_dma_desc *desc = hsuc->desc;
 	size_t bytes = hsu_dma_desc_size(desc);
 	int i;
-	unsigned long flags;

-	spin_lock_irqsave(&hsuc->lock, flags);
 	i = desc->active % HSU_DMA_CHAN_NR_DESC;
 	do {
 		bytes += hsu_chan_readl(hsuc, HSU_CH_DxTSR(i));
 	} while (--i >= 0);
-	spin_unlock_irqrestore(&hsuc->lock, flags);

 	return bytes;
 }
@@ -327,24 +316,6 @@ static int hsu_dma_slave_config(struct dma_chan *chan,
 	return 0;
 }

-static void hsu_dma_chan_deactivate(struct hsu_dma_chan *hsuc)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&hsuc->lock, flags);
-	hsu_chan_disable(hsuc);
-	spin_unlock_irqrestore(&hsuc->lock, flags);
-}
-
-static void hsu_dma_chan_activate(struct hsu_dma_chan *hsuc)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&hsuc->lock, flags);
-	hsu_chan_enable(hsuc);
-	spin_unlock_irqrestore(&hsuc->lock, flags);
-}
-
 static int hsu_dma_pause(struct dma_chan *chan)
 {
 	struct hsu_dma_chan *hsuc = to_hsu_dma_chan(chan);
@@ -352,7 +323,7 @@ static int hsu_dma_pause(struct dma_chan *chan)
 	spin_lock_irqsave(&hsuc->vchan.lock, flags);
 	if (hsuc->desc && hsuc->desc->status == DMA_IN_PROGRESS) {
-		hsu_dma_chan_deactivate(hsuc);
+		hsu_chan_disable(hsuc);
 		hsuc->desc->status = DMA_PAUSED;
 	}
 	spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
@@ -368,7 +339,7 @@ static int hsu_dma_resume(struct dma_chan *chan)
 	spin_lock_irqsave(&hsuc->vchan.lock, flags);
 	if (hsuc->desc && hsuc->desc->status == DMA_PAUSED) {
 		hsuc->desc->status = DMA_IN_PROGRESS;
-		hsu_dma_chan_activate(hsuc);
+		hsu_chan_enable(hsuc);
 	}
 	spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
@@ -441,8 +412,6 @@ int hsu_dma_probe(struct hsu_dma_chip *chip)
 		hsuc->direction = (i & 0x1) ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV;
 		hsuc->reg = addr + i * HSU_DMA_CHAN_LENGTH;
-
-		spin_lock_init(&hsuc->lock);
 	}

 	dma_cap_set(DMA_SLAVE, hsu->dma.cap_mask);
...
@@ -78,7 +78,6 @@ struct hsu_dma_chan {
 	struct virt_dma_chan vchan;

 	void __iomem *reg;
-	spinlock_t lock;

 	/* hardware configuration */
 	enum dma_transfer_direction direction;
...
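The hsu hunks above remove the driver's private per-channel spinlock: every path that touches the channel registers (pause, resume, status read, residue accounting) already runs under the virt-dma channel lock, so hsu_chan_enable()/hsu_chan_disable() are now called directly with vchan.lock held. A simplified sketch of the resulting layering, with the field layout abridged and assumed rather than copied from virt-dma.h:

	/* Abridged/assumed layout: the virt-dma core already provides a per-channel
	 * lock, so the hsu channel reuses hsuc->vchan.lock instead of a second one. */
	struct virt_dma_chan {
		struct dma_chan chan;
		struct tasklet_struct task;
		spinlock_t lock;		/* taken by the dmaengine callbacks */
		/* ... descriptor lists ... */
	};

	struct hsu_dma_chan {
		struct virt_dma_chan vchan;	/* hsuc->vchan.lock now serializes register access */
		void __iomem *reg;
		enum dma_transfer_direction direction;
		/* ... */
	};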
@@ -1083,8 +1083,12 @@ static int __init imxdma_probe(struct platform_device *pdev)
 	if (IS_ERR(imxdma->dma_ahb))
 		return PTR_ERR(imxdma->dma_ahb);

-	clk_prepare_enable(imxdma->dma_ipg);
-	clk_prepare_enable(imxdma->dma_ahb);
+	ret = clk_prepare_enable(imxdma->dma_ipg);
+	if (ret)
+		return ret;
+
+	ret = clk_prepare_enable(imxdma->dma_ahb);
+	if (ret)
+		goto disable_dma_ipg_clk;

 	/* reset DMA module */
 	imx_dmav1_writel(imxdma, DCR_DRST, DMA_DCR);
@@ -1094,20 +1098,20 @@ static int __init imxdma_probe(struct platform_device *pdev)
 				       dma_irq_handler, 0, "DMA", imxdma);
 		if (ret) {
 			dev_warn(imxdma->dev, "Can't register IRQ for DMA\n");
-			goto err;
+			goto disable_dma_ahb_clk;
 		}

 		irq_err = platform_get_irq(pdev, 1);
 		if (irq_err < 0) {
 			ret = irq_err;
-			goto err;
+			goto disable_dma_ahb_clk;
 		}

 		ret = devm_request_irq(&pdev->dev, irq_err,
 				       imxdma_err_handler, 0, "DMA", imxdma);
 		if (ret) {
 			dev_warn(imxdma->dev, "Can't register ERRIRQ for DMA\n");
-			goto err;
+			goto disable_dma_ahb_clk;
 		}
 	}
@@ -1144,7 +1148,7 @@ static int __init imxdma_probe(struct platform_device *pdev)
 			dev_warn(imxdma->dev, "Can't register IRQ %d "
 				 "for DMA channel %d\n",
 				 irq + i, i);
-			goto err;
+			goto disable_dma_ahb_clk;
 		}

 		init_timer(&imxdmac->watchdog);
 		imxdmac->watchdog.function = &imxdma_watchdog;
@@ -1183,14 +1187,14 @@ static int __init imxdma_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, imxdma);

-	imxdma->dma_device.copy_align = 2; /* 2^2 = 4 bytes alignment */
+	imxdma->dma_device.copy_align = DMAENGINE_ALIGN_4_BYTES;
 	imxdma->dma_device.dev->dma_parms = &imxdma->dma_parms;
 	dma_set_max_seg_size(imxdma->dma_device.dev, 0xffffff);

 	ret = dma_async_device_register(&imxdma->dma_device);
 	if (ret) {
 		dev_err(&pdev->dev, "unable to register\n");
-		goto err;
+		goto disable_dma_ahb_clk;
 	}

 	if (pdev->dev.of_node) {
@@ -1206,9 +1210,10 @@ static int __init imxdma_probe(struct platform_device *pdev)
 err_of_dma_controller:
 	dma_async_device_unregister(&imxdma->dma_device);
-err:
-	clk_disable_unprepare(imxdma->dma_ipg);
+disable_dma_ahb_clk:
 	clk_disable_unprepare(imxdma->dma_ahb);
+disable_dma_ipg_clk:
+	clk_disable_unprepare(imxdma->dma_ipg);
 	return ret;
 }
...
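The imx-dma hunks above make probe() honour clk_prepare_enable() failures and unwind the two clocks in reverse order through the new disable_dma_ahb_clk/disable_dma_ipg_clk labels. The general pattern, as a minimal sketch with illustrative names:

	#include <linux/clk.h>

	/* Hypothetical pair of clocks enabled in order and released in reverse order. */
	static int example_enable_clocks(struct clk *ipg, struct clk *ahb)
	{
		int ret;

		ret = clk_prepare_enable(ipg);
		if (ret)
			return ret;

		ret = clk_prepare_enable(ahb);
		if (ret)
			goto disable_ipg;

		return 0;

	disable_ipg:
		clk_disable_unprepare(ipg);
		return ret;
	}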
 obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o
-ioatdma-y := pci.o dma.o dma_v2.o dma_v3.o dca.o
+ioatdma-y := init.o dma.o prep.o dca.o sysfs.o
/*
* Copyright(c) 2004 - 2009 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in the
* file called COPYING.
*/
#ifndef IOATDMA_V2_H
#define IOATDMA_V2_H
#include <linux/dmaengine.h>
#include <linux/circ_buf.h>
#include "dma.h"
#include "hw.h"
extern int ioat_pending_level;
extern int ioat_ring_alloc_order;
/*
* workaround for IOAT ver.3.0 null descriptor issue
* (channel returns error when size is 0)
*/
#define NULL_DESC_BUFFER_SIZE 1
#define IOAT_MAX_ORDER 16
#define ioat_get_alloc_order() \
(min(ioat_ring_alloc_order, IOAT_MAX_ORDER))
#define ioat_get_max_alloc_order() \
(min(ioat_ring_max_alloc_order, IOAT_MAX_ORDER))
/* struct ioat2_dma_chan - ioat v2 / v3 channel attributes
* @base: common ioat channel parameters
* @xfercap_log; log2 of channel max transfer length (for fast division)
* @head: allocated index
* @issued: hardware notification point
* @tail: cleanup index
* @dmacount: identical to 'head' except for occasionally resetting to zero
* @alloc_order: log2 of the number of allocated descriptors
* @produce: number of descriptors to produce at submit time
* @ring: software ring buffer implementation of hardware ring
* @prep_lock: serializes descriptor preparation (producers)
*/
struct ioat2_dma_chan {
struct ioat_chan_common base;
size_t xfercap_log;
u16 head;
u16 issued;
u16 tail;
u16 dmacount;
u16 alloc_order;
u16 produce;
struct ioat_ring_ent **ring;
spinlock_t prep_lock;
};
static inline struct ioat2_dma_chan *to_ioat2_chan(struct dma_chan *c)
{
struct ioat_chan_common *chan = to_chan_common(c);
return container_of(chan, struct ioat2_dma_chan, base);
}
static inline u32 ioat2_ring_size(struct ioat2_dma_chan *ioat)
{
return 1 << ioat->alloc_order;
}
/* count of descriptors in flight with the engine */
static inline u16 ioat2_ring_active(struct ioat2_dma_chan *ioat)
{
return CIRC_CNT(ioat->head, ioat->tail, ioat2_ring_size(ioat));
}
/* count of descriptors pending submission to hardware */
static inline u16 ioat2_ring_pending(struct ioat2_dma_chan *ioat)
{
return CIRC_CNT(ioat->head, ioat->issued, ioat2_ring_size(ioat));
}
static inline u32 ioat2_ring_space(struct ioat2_dma_chan *ioat)
{
return ioat2_ring_size(ioat) - ioat2_ring_active(ioat);
}
static inline u16 ioat2_xferlen_to_descs(struct ioat2_dma_chan *ioat, size_t len)
{
u16 num_descs = len >> ioat->xfercap_log;
num_descs += !!(len & ((1 << ioat->xfercap_log) - 1));
return num_descs;
}
/**
* struct ioat_ring_ent - wrapper around hardware descriptor
* @hw: hardware DMA descriptor (for memcpy)
* @fill: hardware fill descriptor
* @xor: hardware xor descriptor
* @xor_ex: hardware xor extension descriptor
* @pq: hardware pq descriptor
* @pq_ex: hardware pq extension descriptor
* @pqu: hardware pq update descriptor
* @raw: hardware raw (un-typed) descriptor
* @txd: the generic software descriptor for all engines
* @len: total transaction length for unmap
* @result: asynchronous result of validate operations
* @id: identifier for debug
*/
struct ioat_ring_ent {
union {
struct ioat_dma_descriptor *hw;
struct ioat_xor_descriptor *xor;
struct ioat_xor_ext_descriptor *xor_ex;
struct ioat_pq_descriptor *pq;
struct ioat_pq_ext_descriptor *pq_ex;
struct ioat_pq_update_descriptor *pqu;
struct ioat_raw_descriptor *raw;
};
size_t len;
struct dma_async_tx_descriptor txd;
enum sum_check_flags *result;
#ifdef DEBUG
int id;
#endif
struct ioat_sed_ent *sed;
};
static inline struct ioat_ring_ent *
ioat2_get_ring_ent(struct ioat2_dma_chan *ioat, u16 idx)
{
return ioat->ring[idx & (ioat2_ring_size(ioat) - 1)];
}
static inline void ioat2_set_chainaddr(struct ioat2_dma_chan *ioat, u64 addr)
{
struct ioat_chan_common *chan = &ioat->base;
writel(addr & 0x00000000FFFFFFFF,
chan->reg_base + IOAT2_CHAINADDR_OFFSET_LOW);
writel(addr >> 32,
chan->reg_base + IOAT2_CHAINADDR_OFFSET_HIGH);
}
int ioat2_dma_probe(struct ioatdma_device *dev, int dca);
int ioat3_dma_probe(struct ioatdma_device *dev, int dca);
struct dca_provider *ioat2_dca_init(struct pci_dev *pdev, void __iomem *iobase);
struct dca_provider *ioat3_dca_init(struct pci_dev *pdev, void __iomem *iobase);
int ioat2_check_space_lock(struct ioat2_dma_chan *ioat, int num_descs);
int ioat2_enumerate_channels(struct ioatdma_device *device);
struct dma_async_tx_descriptor *
ioat2_dma_prep_memcpy_lock(struct dma_chan *c, dma_addr_t dma_dest,
dma_addr_t dma_src, size_t len, unsigned long flags);
void ioat2_issue_pending(struct dma_chan *chan);
int ioat2_alloc_chan_resources(struct dma_chan *c);
void ioat2_free_chan_resources(struct dma_chan *c);
void __ioat2_restart_chan(struct ioat2_dma_chan *ioat);
bool reshape_ring(struct ioat2_dma_chan *ioat, int order);
void __ioat2_issue_pending(struct ioat2_dma_chan *ioat);
void ioat2_cleanup_event(unsigned long data);
void ioat2_timer_event(unsigned long data);
int ioat2_quiesce(struct ioat_chan_common *chan, unsigned long tmo);
int ioat2_reset_sync(struct ioat_chan_common *chan, unsigned long tmo);
extern struct kobj_type ioat2_ktype;
extern struct kmem_cache *ioat2_cache;
#endif /* IOATDMA_V2_H */
@@ -21,11 +21,6 @@
 #define IOAT_MMIO_BAR		0

 /* CB device ID's */
-#define IOAT_PCI_DID_5000       0x1A38
-#define IOAT_PCI_DID_CNB        0x360B
-#define IOAT_PCI_DID_SCNB       0x65FF
-#define IOAT_PCI_DID_SNB        0x402F
-
 #define PCI_DEVICE_ID_INTEL_IOAT_IVB0	0x0e20
 #define PCI_DEVICE_ID_INTEL_IOAT_IVB1	0x0e21
 #define PCI_DEVICE_ID_INTEL_IOAT_IVB2	0x0e22
@@ -58,6 +53,17 @@
 #define PCI_DEVICE_ID_INTEL_IOAT_BDXDE2	0x6f52
 #define PCI_DEVICE_ID_INTEL_IOAT_BDXDE3	0x6f53

+#define PCI_DEVICE_ID_INTEL_IOAT_BDX0	0x6f20
+#define PCI_DEVICE_ID_INTEL_IOAT_BDX1	0x6f21
+#define PCI_DEVICE_ID_INTEL_IOAT_BDX2	0x6f22
+#define PCI_DEVICE_ID_INTEL_IOAT_BDX3	0x6f23
+#define PCI_DEVICE_ID_INTEL_IOAT_BDX4	0x6f24
+#define PCI_DEVICE_ID_INTEL_IOAT_BDX5	0x6f25
+#define PCI_DEVICE_ID_INTEL_IOAT_BDX6	0x6f26
+#define PCI_DEVICE_ID_INTEL_IOAT_BDX7	0x6f27
+#define PCI_DEVICE_ID_INTEL_IOAT_BDX8	0x6f2e
+#define PCI_DEVICE_ID_INTEL_IOAT_BDX9	0x6f2f
+
 #define IOAT_VER_1_2            0x12    /* Version 1.2 */
 #define IOAT_VER_2_0            0x20    /* Version 2.0 */
 #define IOAT_VER_3_0            0x30    /* Version 3.0 */
...
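The hunk above only defines the ten new Broadwell Xeon (BDX) device IDs; to take effect they also have to be listed in the driver's pci_device_id table, using the same PCI_VDEVICE() pattern visible in the ioat PCI device table further below. A sketch of what those entries look like (illustrative table name, not the driver's):

	static const struct pci_device_id example_bdx_ids[] = {
		{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDX0) },
		{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDX1) },
		/* ... BDX2 through BDX7 ... */
		{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDX8) },
		{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDX9) },
		{ 0, }
	};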
/*
* Intel I/OAT DMA Linux driver
* Copyright(c) 2007 - 2009 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
*/
/*
* This driver supports an Intel I/OAT DMA engine, which does asynchronous
* copy operations.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/dca.h>
#include <linux/slab.h>
#include "dma.h"
#include "dma_v2.h"
#include "registers.h"
#include "hw.h"
MODULE_VERSION(IOAT_DMA_VERSION);
MODULE_LICENSE("Dual BSD/GPL");
MODULE_AUTHOR("Intel Corporation");
static struct pci_device_id ioat_pci_tbl[] = {
/* I/OAT v1 platforms */
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SCNB) },
{ PCI_VDEVICE(UNISYS, PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR) },
/* I/OAT v2 platforms */
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB) },
/* I/OAT v3 platforms */
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_TBG0) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_TBG1) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_TBG2) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_TBG3) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_TBG4) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_TBG5) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_TBG6) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_TBG7) },
/* I/OAT v3.2 platforms */
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_JSF0) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_JSF1) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_JSF2) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_JSF3) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_JSF4) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_JSF5) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_JSF6) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_JSF7) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_JSF8) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_JSF9) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB0) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB1) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB2) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB3) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB4) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB5) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB6) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB7) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB8) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB9) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_IVB0) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_IVB1) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_IVB2) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_IVB3) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_IVB4) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_IVB5) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_IVB6) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_IVB7) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_IVB8) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_IVB9) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_HSW0) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_HSW1) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_HSW2) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_HSW3) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_HSW4) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_HSW5) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_HSW6) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_HSW7) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_HSW8) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_HSW9) },
/* I/OAT v3.3 platforms */
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BWD0) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BWD1) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BWD2) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BWD3) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDXDE0) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDXDE1) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDXDE2) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDXDE3) },
{ 0, }
};
MODULE_DEVICE_TABLE(pci, ioat_pci_tbl);
static int ioat_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id);
static void ioat_remove(struct pci_dev *pdev);
static int ioat_dca_enabled = 1;
module_param(ioat_dca_enabled, int, 0644);
MODULE_PARM_DESC(ioat_dca_enabled, "control support of dca service (default: 1)");
struct kmem_cache *ioat2_cache;
struct kmem_cache *ioat3_sed_cache;
#define DRV_NAME "ioatdma"
static struct pci_driver ioat_pci_driver = {
.name = DRV_NAME,
.id_table = ioat_pci_tbl,
.probe = ioat_pci_probe,
.remove = ioat_remove,
};
static struct ioatdma_device *
alloc_ioatdma(struct pci_dev *pdev, void __iomem *iobase)
{
struct device *dev = &pdev->dev;
struct ioatdma_device *d = devm_kzalloc(dev, sizeof(*d), GFP_KERNEL);
if (!d)
return NULL;
d->pdev = pdev;
d->reg_base = iobase;
return d;
}
static int ioat_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
void __iomem * const *iomap;
struct device *dev = &pdev->dev;
struct ioatdma_device *device;
int err;
err = pcim_enable_device(pdev);
if (err)
return err;
err = pcim_iomap_regions(pdev, 1 << IOAT_MMIO_BAR, DRV_NAME);
if (err)
return err;
iomap = pcim_iomap_table(pdev);
if (!iomap)
return -ENOMEM;
err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
if (err)
err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
if (err)
return err;
err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
if (err)
err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
if (err)
return err;
device = alloc_ioatdma(pdev, iomap[IOAT_MMIO_BAR]);
if (!device)
return -ENOMEM;
pci_set_master(pdev);
pci_set_drvdata(pdev, device);
device->version = readb(device->reg_base + IOAT_VER_OFFSET);
if (device->version == IOAT_VER_1_2)
err = ioat1_dma_probe(device, ioat_dca_enabled);
else if (device->version == IOAT_VER_2_0)
err = ioat2_dma_probe(device, ioat_dca_enabled);
else if (device->version >= IOAT_VER_3_0)
err = ioat3_dma_probe(device, ioat_dca_enabled);
else
return -ENODEV;
if (err) {
dev_err(dev, "Intel(R) I/OAT DMA Engine init failed\n");
return -ENODEV;
}
return 0;
}
static void ioat_remove(struct pci_dev *pdev)
{
struct ioatdma_device *device = pci_get_drvdata(pdev);
if (!device)
return;
dev_err(&pdev->dev, "Removing dma and dca services\n");
if (device->dca) {
unregister_dca_provider(device->dca, &pdev->dev);
free_dca_provider(device->dca);
device->dca = NULL;
}
ioat_dma_remove(device);
}
static int __init ioat_init_module(void)
{
int err = -ENOMEM;
pr_info("%s: Intel(R) QuickData Technology Driver %s\n",
DRV_NAME, IOAT_DMA_VERSION);
ioat2_cache = kmem_cache_create("ioat2", sizeof(struct ioat_ring_ent),
0, SLAB_HWCACHE_ALIGN, NULL);
if (!ioat2_cache)
return -ENOMEM;
ioat3_sed_cache = KMEM_CACHE(ioat_sed_ent, 0);
if (!ioat3_sed_cache)
goto err_ioat2_cache;
err = pci_register_driver(&ioat_pci_driver);
if (err)
goto err_ioat3_cache;
return 0;
err_ioat3_cache:
kmem_cache_destroy(ioat3_sed_cache);
err_ioat2_cache:
kmem_cache_destroy(ioat2_cache);
return err;
}
module_init(ioat_init_module);
static void __exit ioat_exit_module(void)
{
pci_unregister_driver(&ioat_pci_driver);
kmem_cache_destroy(ioat2_cache);
}
module_exit(ioat_exit_module);
/*
* Intel I/OAT DMA Linux driver
* Copyright(c) 2004 - 2015 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/dmaengine.h>
#include <linux/pci.h>
#include "dma.h"
#include "registers.h"
#include "hw.h"
#include "../dmaengine.h"
static ssize_t cap_show(struct dma_chan *c, char *page)
{
struct dma_device *dma = c->device;
return sprintf(page, "copy%s%s%s%s%s\n",
dma_has_cap(DMA_PQ, dma->cap_mask) ? " pq" : "",
dma_has_cap(DMA_PQ_VAL, dma->cap_mask) ? " pq_val" : "",
dma_has_cap(DMA_XOR, dma->cap_mask) ? " xor" : "",
dma_has_cap(DMA_XOR_VAL, dma->cap_mask) ? " xor_val" : "",
dma_has_cap(DMA_INTERRUPT, dma->cap_mask) ? " intr" : "");
}
struct ioat_sysfs_entry ioat_cap_attr = __ATTR_RO(cap);
static ssize_t version_show(struct dma_chan *c, char *page)
{
struct dma_device *dma = c->device;
struct ioatdma_device *ioat_dma = to_ioatdma_device(dma);
return sprintf(page, "%d.%d\n",
ioat_dma->version >> 4, ioat_dma->version & 0xf);
}
struct ioat_sysfs_entry ioat_version_attr = __ATTR_RO(version);
static ssize_t
ioat_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
{
struct ioat_sysfs_entry *entry;
struct ioatdma_chan *ioat_chan;
entry = container_of(attr, struct ioat_sysfs_entry, attr);
ioat_chan = container_of(kobj, struct ioatdma_chan, kobj);
if (!entry->show)
return -EIO;
return entry->show(&ioat_chan->dma_chan, page);
}
const struct sysfs_ops ioat_sysfs_ops = {
.show = ioat_attr_show,
};
void ioat_kobject_add(struct ioatdma_device *ioat_dma, struct kobj_type *type)
{
struct dma_device *dma = &ioat_dma->dma_dev;
struct dma_chan *c;
list_for_each_entry(c, &dma->channels, device_node) {
struct ioatdma_chan *ioat_chan = to_ioat_chan(c);
struct kobject *parent = &c->dev->device.kobj;
int err;
err = kobject_init_and_add(&ioat_chan->kobj, type,
parent, "quickdata");
if (err) {
dev_warn(to_dev(ioat_chan),
"sysfs init error (%d), continuing...\n", err);
kobject_put(&ioat_chan->kobj);
set_bit(IOAT_KOBJ_INIT_FAIL, &ioat_chan->state);
}
}
}
void ioat_kobject_del(struct ioatdma_device *ioat_dma)
{
struct dma_device *dma = &ioat_dma->dma_dev;
struct dma_chan *c;
list_for_each_entry(c, &dma->channels, device_node) {
struct ioatdma_chan *ioat_chan = to_ioat_chan(c);
if (!test_bit(IOAT_KOBJ_INIT_FAIL, &ioat_chan->state)) {
kobject_del(&ioat_chan->kobj);
kobject_put(&ioat_chan->kobj);
}
}
}
static ssize_t ring_size_show(struct dma_chan *c, char *page)
{
struct ioatdma_chan *ioat_chan = to_ioat_chan(c);
return sprintf(page, "%d\n", (1 << ioat_chan->alloc_order) & ~1);
}
static struct ioat_sysfs_entry ring_size_attr = __ATTR_RO(ring_size);
static ssize_t ring_active_show(struct dma_chan *c, char *page)
{
struct ioatdma_chan *ioat_chan = to_ioat_chan(c);
/* ...taken outside the lock, no need to be precise */
return sprintf(page, "%d\n", ioat_ring_active(ioat_chan));
}
static struct ioat_sysfs_entry ring_active_attr = __ATTR_RO(ring_active);
static struct attribute *ioat_attrs[] = {
&ring_size_attr.attr,
&ring_active_attr.attr,
&ioat_cap_attr.attr,
&ioat_version_attr.attr,
NULL,
};
struct kobj_type ioat_ktype = {
.sysfs_ops = &ioat_sysfs_ops,
.default_attrs = ioat_attrs,
};
@@ -24,7 +24,6 @@
 #include "virt-dma.h"

 #define DRIVER_NAME		"k3-dma"
-#define DMA_ALIGN		3
 #define DMA_MAX_SIZE		0x1ffc

 #define INT_STAT		0x00
@@ -732,7 +731,7 @@ static int k3_dma_probe(struct platform_device *op)
 	d->slave.device_pause = k3_dma_transfer_pause;
 	d->slave.device_resume = k3_dma_transfer_resume;
 	d->slave.device_terminate_all = k3_dma_terminate_all;
-	d->slave.copy_align = DMA_ALIGN;
+	d->slave.copy_align = DMAENGINE_ALIGN_8_BYTES;

 	/* init virtual channel */
 	d->chans = devm_kzalloc(&op->dev,
...
@@ -39,7 +39,7 @@
  */
 #define MIC_DMA_MAX_NUM_CHAN	8
 #define MIC_DMA_NUM_CHAN	4
-#define MIC_DMA_ALIGN_SHIFT	6
+#define MIC_DMA_ALIGN_SHIFT	DMAENGINE_ALIGN_64_BYTES
 #define MIC_DMA_ALIGN_BYTES	(1 << MIC_DMA_ALIGN_SHIFT)
 #define MIC_DMA_DESC_RX_SIZE	(128 * 1024 - 4)
...
@@ -72,7 +72,6 @@
 #define DCMD_WIDTH4	(3 << 14)	/* 4 byte width (Word) */
 #define DCMD_LENGTH	0x01fff		/* length mask (max = 8K - 1) */

-#define PDMA_ALIGNMENT		3
 #define PDMA_MAX_DESC_BYTES	DCMD_LENGTH

 struct mmp_pdma_desc_hw {
@@ -1071,7 +1070,7 @@ static int mmp_pdma_probe(struct platform_device *op)
 	pdev->device.device_issue_pending = mmp_pdma_issue_pending;
 	pdev->device.device_config = mmp_pdma_config;
 	pdev->device.device_terminate_all = mmp_pdma_terminate_all;
-	pdev->device.copy_align = PDMA_ALIGNMENT;
+	pdev->device.copy_align = DMAENGINE_ALIGN_8_BYTES;
 	pdev->device.src_addr_widths = widths;
 	pdev->device.dst_addr_widths = widths;
 	pdev->device.directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
...
@@ -100,7 +100,6 @@ enum mmp_tdma_type {
 	PXA910_SQU,
 };

-#define TDMA_ALIGNMENT		3
 #define TDMA_MAX_XFER_BYTES	SZ_64K

 struct mmp_tdma_chan {
@@ -695,7 +694,7 @@ static int mmp_tdma_probe(struct platform_device *pdev)
 	tdev->device.device_pause = mmp_tdma_pause_chan;
 	tdev->device.device_resume = mmp_tdma_resume_chan;
 	tdev->device.device_terminate_all = mmp_tdma_terminate_all;
-	tdev->device.copy_align = TDMA_ALIGNMENT;
+	tdev->device.copy_align = DMAENGINE_ALIGN_8_BYTES;

 	dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
 	platform_set_drvdata(pdev, tdev);
...
@@ -11,10 +11,6 @@
  * but WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
  */

 #include <linux/dmaengine.h>
...
@@ -1198,6 +1198,9 @@ static inline int _loop(unsigned dry_run, u8 buf[],
 	unsigned lcnt0, lcnt1, ljmp0, ljmp1;
 	struct _arg_LPEND lpend;

+	if (*bursts == 1)
+		return _bursts(dry_run, buf, pxs, 1);
+
 	/* Max iterations possible in DMALP is 256 */
 	if (*bursts >= 256*256) {
 		lcnt1 = 256;
...
@@ -13,7 +13,7 @@ shdma-$(CONFIG_SH_DMAE_R8A73A4) += shdma-r8a73a4.o
 shdma-objs := $(shdma-y)
 obj-$(CONFIG_SH_DMAE) += shdma.o

-obj-$(CONFIG_SUDMAC) += sudmac.o
-obj-$(CONFIG_RCAR_HPB_DMAE) += rcar-hpbdma.o
 obj-$(CONFIG_RCAR_DMAC) += rcar-dmac.o
+obj-$(CONFIG_RCAR_HPB_DMAE) += rcar-hpbdma.o
 obj-$(CONFIG_RENESAS_USB_DMAC) += usb-dmac.o
+obj-$(CONFIG_SUDMAC) += sudmac.o