Commit 4cb865de authored by Linus Torvalds

Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx: (33 commits)
  x86: poll waiting for I/OAT DMA channel status
  maintainers: add dma engine tree details
  dmaengine: add TODO items for future work on dma drivers
  dmaengine: Add API documentation for slave dma usage
  dmaengine/dw_dmac: Update maintainer-ship
  dmaengine: move link order
  dmaengine/dw_dmac: implement pause and resume in dwc_control
  dmaengine/dw_dmac: Replace spin_lock* with irqsave variants and enable submission from callback
  dmaengine/dw_dmac: Divide one sg to many desc, if sg len is greater than DWC_MAX_COUNT
  dmaengine/dw_dmac: set residue as total len in dwc_tx_status if status is !DMA_SUCCESS
  dmaengine/dw_dmac: don't call callback routine in case dmaengine_terminate_all() is called
  dmaengine: at_hdmac: pause: no need to wait for FIFO empty
  pch_dma: modify pci device table definition
  pch_dma: Support new device ML7223 IOH
  pch_dma: Support I2S for ML7213 IOH
  pch_dma: Fix DMA setting issue
  pch_dma: modify for checkpatch
  pch_dma: fix dma direction issue for ML7213 IOH video-in
  dmaengine: at_hdmac: use descriptor chaining help function
  dmaengine: at_hdmac: implement pause and resume in atc_control
  ...

Fix up trivial conflict in drivers/dma/dw_dmac.c
parents 55f08e1b 19d78a61
DMA Engine API Guide
====================
Vinod Koul <vinod dot koul at intel.com>
NOTE: For DMA Engine usage in async_tx please see:
Documentation/crypto/async-tx-api.txt
Below is a guide for device driver writers on how to use the slave-DMA API of the
DMA Engine. This guide applies only to slave DMA usage.
Slave DMA usage consists of the following steps:
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for transaction
4. Submit the transaction and wait for callback notification
1. Allocate a DMA slave channel
Channel allocation is slightly different in the slave DMA context: client
drivers typically need a channel from a particular DMA controller, and in some
cases even a specific channel is desired. To request a channel, the
dma_request_channel() API is used.
Interface:
struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
        dma_filter_fn filter_fn,
        void *filter_param);
where dma_filter_fn is defined as:
typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
When the optional 'filter_fn' parameter is set to NULL, dma_request_channel()
simply returns the first channel that satisfies the capability mask. Otherwise,
when the mask parameter is insufficient to specify the necessary channel, the
filter_fn routine can be used to select from the available channels in the
system. The filter_fn routine is called once for each free channel in the
system; upon seeing a suitable channel, filter_fn returns true, which flags
that channel as the return value of dma_request_channel(). A channel allocated
via this interface is exclusive to the caller until dma_release_channel() is
called.
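As an illustration, here is a minimal sketch of slave channel allocation. The
filter matches channels by the DMA controller's struct device; my_filter() and
the device-pointer criterion are hypothetical, and each platform defines its
own filter logic:

#include <linux/dmaengine.h>

/* Accept only channels belonging to our DMA controller (hypothetical test) */
static bool my_filter(struct dma_chan *chan, void *filter_param)
{
        struct device *dma_dev = filter_param;

        return chan->device->dev == dma_dev;
}

static struct dma_chan *my_request_channel(struct device *dma_dev)
{
        dma_cap_mask_t mask;

        dma_cap_zero(mask);
        dma_cap_set(DMA_SLAVE, mask);

        /* Returns NULL if no channel satisfies the mask and filter */
        return dma_request_channel(mask, my_filter, dma_dev);
}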
2. Set slave and controller specific parameters
The next step is always to pass some specific information to the DMA driver.
Most of the generic information that a slave DMA channel can use is in struct
dma_slave_config: it allows clients to specify DMA direction, DMA addresses,
bus widths, DMA burst lengths, etc. If a DMA controller has more parameters to
be sent, it should embed struct dma_slave_config in its controller-specific
structure; that gives clients the flexibility to pass more parameters, if
required.
Interface:
int dmaengine_slave_config(struct dma_chan *chan,
        struct dma_slave_config *config);
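As an example, a client feeding a peripheral's transmit FIFO might configure
the channel roughly as below. MY_PERIPHERAL_TX_FIFO and the burst size are
illustrative placeholders, not values from any real driver:

#include <linux/dmaengine.h>

static int my_slave_config(struct dma_chan *chan)
{
        struct dma_slave_config cfg = {
                .direction      = DMA_TO_DEVICE,
                .dst_addr       = MY_PERIPHERAL_TX_FIFO, /* FIFO bus address */
                .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
                .dst_maxburst   = 4,    /* words per burst */
        };

        return dmaengine_slave_config(chan, &cfg);
}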
3. Get a descriptor for transaction
For slave usage the various modes of slave transfers supported by the
DMA engine are:
slave_sg - DMA a list of scatter-gather buffers from/to a peripheral
dma_cyclic - perform a cyclic DMA operation from/to a peripheral until the
operation is explicitly stopped.
A non-NULL return from these transfer APIs represents a "descriptor" for the
given transaction.
Interface:
struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
        struct dma_chan *chan, struct scatterlist *sgl,
        unsigned int sg_len, enum dma_data_direction direction,
        unsigned long flags);
struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
        struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
        size_t period_len, enum dma_data_direction direction);
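Since there is no generic wrapper for the prep calls at this point, clients
invoke the channel's prep routine directly. Below is a sketch for the slave_sg
case, assuming a memory-to-peripheral transfer and a scatterlist that has
already been mapped for DMA; my_prep_tx() is a hypothetical helper:

#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

static struct dma_async_tx_descriptor *my_prep_tx(struct dma_chan *chan,
                struct scatterlist *sgl, unsigned int sg_len)
{
        /* A NULL return means the driver could not prepare the descriptor */
        return chan->device->device_prep_slave_sg(chan, sgl, sg_len,
                        DMA_TO_DEVICE,
                        DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
}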
4. Submit the transaction and wait for callback notification
To schedule the transaction for execution by the DMA device, the "descriptor"
returned in step (3) above needs to be submitted.
To tell the dma driver that a transaction is ready to be serviced, the
descriptor's tx_submit() callback needs to be invoked; this chains the
descriptor onto the pending queue.
The transactions in the pending queue can then be activated by calling the
issue_pending API. If the channel is idle, the first transaction in the queue
is started and subsequent ones are queued up.
On completion of each DMA operation, the next one in the queue is started and a
tasklet triggered. The tasklet then calls the client driver's completion
callback routine for notification, if one is set.
Interface:
void dma_async_issue_pending(struct dma_chan *chan);
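Putting steps (3) and (4) together, a client might submit a descriptor and
wait for completion as sketched below. The completion-based wait and
my_dma_complete() are illustrative; a real driver may instead make progress
directly from the callback:

#include <linux/completion.h>
#include <linux/dmaengine.h>
#include <linux/errno.h>

static void my_dma_complete(void *param)
{
        /* Runs from the DMA driver's tasklet; must not sleep */
        complete(param);
}

static int my_submit_and_wait(struct dma_chan *chan,
                struct dma_async_tx_descriptor *desc,
                struct completion *done)
{
        dma_cookie_t cookie;

        desc->callback = my_dma_complete;
        desc->callback_param = done;

        /* Chain the descriptor onto the channel's pending queue */
        cookie = desc->tx_submit(desc);
        if (dma_submit_error(cookie))
                return -EIO;

        /* Start the pending queue if the channel is idle */
        dma_async_issue_pending(chan);

        wait_for_completion(done);
        return 0;
}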
==============================================================================
Additional usage notes for dma driver writers
1/ Although the DMA engine API specifies that completion callback routines
cannot submit any new operations, for slave DMA the subsequent transaction may
not be available for submission before the callback routine is called, so this
restriction does not apply to DMA-slave devices. Drivers should, however, take
care to drop any spin-lock they might be holding before calling the callback
routine, as the sketch below illustrates.
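Here is a sketch of how a dma driver's tasklet can honor this: collect the
callback under the lock, drop the lock, and only then call it. struct my_chan
and struct my_desc are hypothetical stand-ins for a driver's channel and
descriptor types:

#include <linux/dmaengine.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct my_desc {
        struct dma_async_tx_descriptor txd;
        struct list_head node;
};

struct my_chan {
        spinlock_t lock;
        struct list_head free_list;
};

static void my_complete_one(struct my_chan *mc, struct my_desc *desc)
{
        dma_async_tx_callback callback;
        void *param;
        unsigned long flags;

        spin_lock_irqsave(&mc->lock, flags);
        callback = desc->txd.callback;
        param = desc->txd.callback_param;
        list_move(&desc->node, &mc->free_list);
        spin_unlock_irqrestore(&mc->lock, flags);

        /* Lock dropped: the client may legally submit a new descriptor here */
        if (callback)
                callback(param);
}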
@@ -2178,6 +2178,8 @@ M: Dan Williams <dan.j.williams@intel.com>
S: Supported
F: drivers/dma/
F: include/linux/dma*
T: git git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx.git
T: git git://git.infradead.org/users/vkoul/slave-dma.git (slave-dma)
DME1737 HARDWARE MONITOR DRIVER
M: Juerg Haefliger <juergh@gmail.com>
@@ -5451,6 +5453,13 @@ L: linux-serial@vger.kernel.org
S: Maintained
F: drivers/tty/serial
SYNOPSYS DESIGNWARE DMAC DRIVER
M: Viresh Kumar <viresh.kumar@st.com>
S: Maintained
F: include/linux/dw_dmac.h
F: drivers/dma/dw_dmac_regs.h
F: drivers/dma/dw_dmac.c
TIMEKEEPING, NTP
M: John Stultz <johnstul@us.ibm.com>
M: Thomas Gleixner <tglx@linutronix.de>
......
@@ -17,6 +17,9 @@ obj-$(CONFIG_SFI) += sfi/
# was used and do nothing if so
obj-$(CONFIG_PNP) += pnp/
obj-$(CONFIG_ARM_AMBA) += amba/
# Many drivers will want to use DMA so this has to be made available
# really early.
obj-$(CONFIG_DMA_ENGINE) += dma/
obj-$(CONFIG_VIRTIO) += virtio/
obj-$(CONFIG_XEN) += xen/
@@ -92,7 +95,6 @@ obj-$(CONFIG_EISA) += eisa/
obj-y += lguest/
obj-$(CONFIG_CPU_FREQ) += cpufreq/
obj-$(CONFIG_CPU_IDLE) += cpuidle/
obj-$(CONFIG_DMA_ENGINE) += dma/
obj-$(CONFIG_MMC) += mmc/
obj-$(CONFIG_MEMSTICK) += memstick/
obj-y += leds/
......
@@ -200,16 +200,18 @@ config PL330_DMA
platform_data for a dma-pl330 device.
config PCH_DMA
tristate "Intel EG20T PCH / OKI SEMICONDUCTOR ML7213 IOH DMA support"
tristate "Intel EG20T PCH / OKI Semi IOH(ML7213/ML7223) DMA support"
depends on PCI && X86
select DMA_ENGINE
help
Enable support for Intel EG20T PCH DMA engine.
This driver also can be used for OKI SEMICONDUCTOR ML7213 IOH(Input/
Output Hub) which is for IVI(In-Vehicle Infotainment) use.
ML7213 is companion chip for Intel Atom E6xx series.
ML7213 is completely compatible for Intel EG20T PCH.
This driver also can be used for OKI SEMICONDUCTOR IOH(Input/
Output Hub), ML7213 and ML7223.
ML7213 IOH is for IVI(In-Vehicle Infotainment) use and ML7223 IOH is
for MP(Media Phone) use.
ML7213/ML7223 is companion chip for Intel Atom E6xx series.
ML7213/ML7223 is completely compatible for Intel EG20T PCH.
config IMX_SDMA
tristate "i.MX SDMA support"
......
TODO for slave dma
1. Move remaining drivers to use new slave interface
2. Remove old slave pointer mechanism
3. Make issue_pending start the transaction in the drivers below:
- mpc512x_dma
- imx-dma
- imx-sdma
- mxs-dma.c
- dw_dmac
- intel_mid_dma
- ste_dma40
4. Check other subsystems for dma drivers and merge/move to dmaengine
5. Remove dma_slave_config's dma direction.
@@ -103,6 +103,10 @@
/* Bitfields in CTRLB */
#define ATC_SIF(i) (0x3 & (i)) /* Src tx done via AHB-Lite Interface i */
#define ATC_DIF(i) ((0x3 & (i)) << 4) /* Dst tx done via AHB-Lite Interface i */
/* Specify AHB interfaces */
#define AT_DMA_MEM_IF 0 /* interface 0 as memory interface */
#define AT_DMA_PER_IF 1 /* interface 1 as peripheral interface */
#define ATC_SRC_PIP (0x1 << 8) /* Source Picture-in-Picture enabled */
#define ATC_DST_PIP (0x1 << 12) /* Destination Picture-in-Picture enabled */
#define ATC_SRC_DSCR_DIS (0x1 << 16) /* Src Descriptor fetch disable */
@@ -180,13 +184,24 @@ txd_to_at_desc(struct dma_async_tx_descriptor *txd)
/*-- Channels --------------------------------------------------------*/
/**
* atc_status - information bits stored in channel status flag
*
* Manipulated with atomic operations.
*/
enum atc_status {
ATC_IS_ERROR = 0,
ATC_IS_PAUSED = 1,
ATC_IS_CYCLIC = 24,
};
/**
* struct at_dma_chan - internal representation of an Atmel HDMAC channel
* @chan_common: common dmaengine channel object members
* @device: parent device
* @ch_regs: memory mapped register base
* @mask: channel index in a mask
* @error_status: transmit error status information from irq handler
* @status: transmit status information from irq/prep* functions
* to tasklet (use atomic operations)
* @tasklet: bottom half to finish transaction work
* @lock: serializes enqueue/dequeue operations to descriptors lists
@@ -201,7 +216,7 @@ struct at_dma_chan {
struct at_dma *device;
void __iomem *ch_regs;
u8 mask;
unsigned long error_status;
unsigned long status;
struct tasklet_struct tasklet;
spinlock_t lock;
@@ -309,8 +324,8 @@ static void atc_setup_irq(struct at_dma_chan *atchan, int on)
struct at_dma *atdma = to_at_dma(atchan->chan_common.device);
u32 ebci;
/* enable interrupts on buffer chain completion & error */
ebci = AT_DMA_CBTC(atchan->chan_common.chan_id)
/* enable interrupts on buffer transfer completion & error */
ebci = AT_DMA_BTC(atchan->chan_common.chan_id)
| AT_DMA_ERR(atchan->chan_common.chan_id);
if (on)
dma_writel(atdma, EBCIER, ebci);
@@ -347,7 +362,12 @@ static inline int atc_chan_is_enabled(struct at_dma_chan *atchan)
*/
static void set_desc_eol(struct at_desc *desc)
{
desc->lli.ctrlb |= ATC_SRC_DSCR_DIS | ATC_DST_DSCR_DIS;
u32 ctrlb = desc->lli.ctrlb;
ctrlb &= ~ATC_IEN;
ctrlb |= ATC_SRC_DSCR_DIS | ATC_DST_DSCR_DIS;
desc->lli.ctrlb = ctrlb;
desc->lli.dscr = 0;
}
......
@@ -1610,7 +1610,7 @@ int __init coh901318_init(void)
{
return platform_driver_probe(&coh901318_driver, coh901318_probe);
}
arch_initcall(coh901318_init);
subsys_initcall(coh901318_init);
void __exit coh901318_exit(void)
{
......
@@ -2,6 +2,7 @@
* Driver for the Synopsys DesignWare AHB DMA Controller
*
* Copyright (C) 2005-2007 Atmel Corporation
* Copyright (C) 2010-2011 ST Microelectronics
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -138,6 +139,7 @@ struct dw_dma_chan {
void __iomem *ch_regs;
u8 mask;
u8 priority;
bool paused;
spinlock_t lock;
......
@@ -1292,8 +1292,7 @@ static int __devinit intel_mid_dma_probe(struct pci_dev *pdev,
if (err)
goto err_dma;
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
pm_runtime_put_noidle(&pdev->dev);
pm_runtime_allow(&pdev->dev);
return 0;
@@ -1322,6 +1321,9 @@ static int __devinit intel_mid_dma_probe(struct pci_dev *pdev,
static void __devexit intel_mid_dma_remove(struct pci_dev *pdev)
{
struct middma_device *device = pci_get_drvdata(pdev);
pm_runtime_get_noresume(&pdev->dev);
pm_runtime_forbid(&pdev->dev);
middma_shutdown(pdev);
pci_dev_put(pdev);
kfree(device);
@@ -1385,13 +1387,20 @@ int dma_resume(struct pci_dev *pci)
static int dma_runtime_suspend(struct device *dev)
{
struct pci_dev *pci_dev = to_pci_dev(dev);
return dma_suspend(pci_dev, PMSG_SUSPEND);
struct middma_device *device = pci_get_drvdata(pci_dev);
device->state = SUSPENDED;
return 0;
}
static int dma_runtime_resume(struct device *dev)
{
struct pci_dev *pci_dev = to_pci_dev(dev);
return dma_resume(pci_dev);
struct middma_device *device = pci_get_drvdata(pci_dev);
device->state = RUNNING;
iowrite32(REG_BIT0, device->dma_base + DMA_CFG);
return 0;
}
static int dma_runtime_idle(struct device *dev)
......
@@ -508,6 +508,7 @@ int ioat2_alloc_chan_resources(struct dma_chan *c)
struct ioat_ring_ent **ring;
u64 status;
int order;
int i = 0;
/* have we already been set up? */
if (ioat->ring)
@@ -548,8 +549,11 @@ int ioat2_alloc_chan_resources(struct dma_chan *c)
ioat2_start_null_desc(ioat);
/* check that we got off the ground */
udelay(5);
do {
udelay(1);
status = ioat_chansts(chan);
} while (i++ < 20 && !is_ioat_active(status) && !is_ioat_idle(status));
if (is_ioat_active(status) || is_ioat_idle(status)) {
set_bit(IOAT_RUN, &chan->state);
return 1 << ioat->alloc_order;
......
@@ -619,7 +619,7 @@ iop_adma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dma_dest,
if (unlikely(!len))
return NULL;
BUG_ON(unlikely(len > IOP_ADMA_MAX_BYTE_COUNT));
BUG_ON(len > IOP_ADMA_MAX_BYTE_COUNT);
dev_dbg(iop_chan->device->common.dev, "%s len: %u\n",
__func__, len);
@@ -652,7 +652,7 @@ iop_adma_prep_dma_memset(struct dma_chan *chan, dma_addr_t dma_dest,
if (unlikely(!len))
return NULL;
BUG_ON(unlikely(len > IOP_ADMA_MAX_BYTE_COUNT));
BUG_ON(len > IOP_ADMA_MAX_BYTE_COUNT);
dev_dbg(iop_chan->device->common.dev, "%s len: %u\n",
__func__, len);
@@ -686,7 +686,7 @@ iop_adma_prep_dma_xor(struct dma_chan *chan, dma_addr_t dma_dest,
if (unlikely(!len))
return NULL;
BUG_ON(unlikely(len > IOP_ADMA_XOR_MAX_BYTE_COUNT));
BUG_ON(len > IOP_ADMA_XOR_MAX_BYTE_COUNT);
dev_dbg(iop_chan->device->common.dev,
"%s src_cnt: %d len: %u flags: %lx\n",
......
@@ -671,7 +671,7 @@ mv_xor_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
if (unlikely(len < MV_XOR_MIN_BYTE_COUNT))
return NULL;
BUG_ON(unlikely(len > MV_XOR_MAX_BYTE_COUNT));
BUG_ON(len > MV_XOR_MAX_BYTE_COUNT);
spin_lock_bh(&mv_chan->lock);
slot_cnt = mv_chan_memcpy_slot_count(len);
@@ -710,7 +710,7 @@ mv_xor_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
if (unlikely(len < MV_XOR_MIN_BYTE_COUNT))
return NULL;
BUG_ON(unlikely(len > MV_XOR_MAX_BYTE_COUNT));
BUG_ON(len > MV_XOR_MAX_BYTE_COUNT);
spin_lock_bh(&mv_chan->lock);
slot_cnt = mv_chan_memset_slot_count(len);
@@ -744,7 +744,7 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
if (unlikely(len < MV_XOR_MIN_BYTE_COUNT))
return NULL;
BUG_ON(unlikely(len > MV_XOR_MAX_BYTE_COUNT));
BUG_ON(len > MV_XOR_MAX_BYTE_COUNT);
dev_dbg(mv_chan->device->common.dev,
"%s src_cnt: %d len: dest %x %u flags: %ld\n",
......
@@ -77,10 +77,10 @@ struct pch_dma_regs {
u32 dma_ctl0;
u32 dma_ctl1;
u32 dma_ctl2;
u32 reserved1;
u32 dma_ctl3;
u32 dma_sts0;
u32 dma_sts1;
u32 reserved2;
u32 dma_sts2;
u32 reserved3;
struct pch_dma_desc_regs desc[MAX_CHAN_NR];
};
@@ -130,6 +130,7 @@ struct pch_dma {
#define PCH_DMA_CTL0 0x00
#define PCH_DMA_CTL1 0x04
#define PCH_DMA_CTL2 0x08
#define PCH_DMA_CTL3 0x0C
#define PCH_DMA_STS0 0x10
#define PCH_DMA_STS1 0x14
@@ -138,7 +139,8 @@ struct pch_dma {
#define dma_writel(pd, name, val) \
writel((val), (pd)->membase + PCH_DMA_##name)
static inline struct pch_dma_desc *to_pd_desc(struct dma_async_tx_descriptor *txd)
static inline
struct pch_dma_desc *to_pd_desc(struct dma_async_tx_descriptor *txd)
{
return container_of(txd, struct pch_dma_desc, txd);
}
@@ -163,13 +165,15 @@ static inline struct device *chan2parent(struct dma_chan *chan)
return chan->dev->device.parent;
}
static inline struct pch_dma_desc *pdc_first_active(struct pch_dma_chan *pd_chan)
static inline
struct pch_dma_desc *pdc_first_active(struct pch_dma_chan *pd_chan)
{
return list_first_entry(&pd_chan->active_list,
struct pch_dma_desc, desc_node);
}
static inline struct pch_dma_desc *pdc_first_queued(struct pch_dma_chan *pd_chan)
static inline
struct pch_dma_desc *pdc_first_queued(struct pch_dma_chan *pd_chan)
{
return list_first_entry(&pd_chan->queue,
struct pch_dma_desc, desc_node);
@@ -199,6 +203,7 @@ static void pdc_set_dir(struct dma_chan *chan)
struct pch_dma *pd = to_pd(chan->device);
u32 val;
if (chan->chan_id < 8) {
val = dma_readl(pd, CTL0);
if (pd_chan->dir == DMA_TO_DEVICE)
@@ -209,6 +214,19 @@ static void pdc_set_dir(struct dma_chan *chan)
DMA_CTL0_DIR_SHIFT_BITS));
dma_writel(pd, CTL0, val);
} else {
int ch = chan->chan_id - 8; /* ch8-->0 ch9-->1 ... ch11->3 */
val = dma_readl(pd, CTL3);
if (pd_chan->dir == DMA_TO_DEVICE)
val |= 0x1 << (DMA_CTL0_BITS_PER_CH * ch +
DMA_CTL0_DIR_SHIFT_BITS);
else
val &= ~(0x1 << (DMA_CTL0_BITS_PER_CH * ch +
DMA_CTL0_DIR_SHIFT_BITS));
dma_writel(pd, CTL3, val);
}
dev_dbg(chan2dev(chan), "pdc_set_dir: chan %d -> %x\n",
chan->chan_id, val);
@@ -219,6 +237,7 @@ static void pdc_set_mode(struct dma_chan *chan, u32 mode)
struct pch_dma *pd = to_pd(chan->device);
u32 val;
if (chan->chan_id < 8) {
val = dma_readl(pd, CTL0);
val &= ~(DMA_CTL0_MODE_MASK_BITS <<
@@ -226,6 +245,18 @@ static void pdc_set_mode(struct dma_chan *chan, u32 mode)
val |= mode << (DMA_CTL0_BITS_PER_CH * chan->chan_id);
dma_writel(pd, CTL0, val);
} else {
int ch = chan->chan_id - 8; /* ch8-->0 ch9-->1 ... ch11->3 */
val = dma_readl(pd, CTL3);
val &= ~(DMA_CTL0_MODE_MASK_BITS <<
(DMA_CTL0_BITS_PER_CH * ch));
val |= mode << (DMA_CTL0_BITS_PER_CH * ch);
dma_writel(pd, CTL3, val);
}
dev_dbg(chan2dev(chan), "pdc_set_mode: chan %d -> %x\n",
chan->chan_id, val);
@@ -251,9 +282,6 @@ static bool pdc_is_idle(struct pch_dma_chan *pd_chan)
static void pdc_dostart(struct pch_dma_chan *pd_chan, struct pch_dma_desc* desc)
{
struct pch_dma *pd = to_pd(pd_chan->chan.device);
u32 val;
if (!pdc_is_idle(pd_chan)) {
dev_err(chan2dev(&pd_chan->chan),
"BUG: Attempt to start non-idle channel\n");
@@ -279,10 +307,6 @@ static void pdc_dostart(struct pch_dma_chan *pd_chan, struct pch_dma_desc* desc)
channel_writel(pd_chan, NEXT, desc->txd.phys);
pdc_set_mode(&pd_chan->chan, DMA_CTL0_SG);
}
val = dma_readl(pd, CTL2);
val |= 1 << (DMA_CTL2_START_SHIFT_BITS + pd_chan->chan.chan_id);
dma_writel(pd, CTL2, val);
}
static void pdc_chain_complete(struct pch_dma_chan *pd_chan,
@@ -403,7 +427,7 @@ static struct pch_dma_desc *pdc_desc_get(struct pch_dma_chan *pd_chan)
{
struct pch_dma_desc *desc, *_d;
struct pch_dma_desc *ret = NULL;
int i;
int i = 0;
spin_lock(&pd_chan->lock);
list_for_each_entry_safe(desc, _d, &pd_chan->free_list, desc_node) {
@@ -478,7 +502,6 @@ static int pd_alloc_chan_resources(struct dma_chan *chan)
spin_unlock_bh(&pd_chan->lock);
pdc_enable_irq(chan, 1);
pdc_set_dir(chan);
return pd_chan->descs_allocated;
}
@@ -561,6 +584,9 @@ static struct dma_async_tx_descriptor *pd_prep_slave_sg(struct dma_chan *chan,
else
return NULL;
pd_chan->dir = direction;
pdc_set_dir(chan);
for_each_sg(sgl, sg, sg_len, i) {
desc = pdc_desc_get(pd_chan);
@@ -703,6 +729,7 @@ static void pch_dma_save_regs(struct pch_dma *pd)
pd->regs.dma_ctl0 = dma_readl(pd, CTL0);
pd->regs.dma_ctl1 = dma_readl(pd, CTL1);
pd->regs.dma_ctl2 = dma_readl(pd, CTL2);
pd->regs.dma_ctl3 = dma_readl(pd, CTL3);
list_for_each_entry_safe(chan, _c, &pd->dma.channels, device_node) {
pd_chan = to_pd_chan(chan);
@@ -725,6 +752,7 @@ static void pch_dma_restore_regs(struct pch_dma *pd)
dma_writel(pd, CTL0, pd->regs.dma_ctl0);
dma_writel(pd, CTL1, pd->regs.dma_ctl1);
dma_writel(pd, CTL2, pd->regs.dma_ctl2);
dma_writel(pd, CTL3, pd->regs.dma_ctl3);
list_for_each_entry_safe(chan, _c, &pd->dma.channels, device_node) {
pd_chan = to_pd_chan(chan);
@@ -850,8 +878,6 @@ static int __devinit pch_dma_probe(struct pci_dev *pdev,
pd_chan->membase = &regs->desc[i];
pd_chan->dir = (i % 2) ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
spin_lock_init(&pd_chan->lock);
INIT_LIST_HEAD(&pd_chan->active_list);
@@ -929,13 +955,23 @@ static void __devexit pch_dma_remove(struct pci_dev *pdev)
#define PCI_DEVICE_ID_ML7213_DMA1_8CH 0x8026
#define PCI_DEVICE_ID_ML7213_DMA2_8CH 0x802B
#define PCI_DEVICE_ID_ML7213_DMA3_4CH 0x8034
#define PCI_DEVICE_ID_ML7213_DMA4_12CH 0x8032
#define PCI_DEVICE_ID_ML7223_DMA1_4CH 0x800B
#define PCI_DEVICE_ID_ML7223_DMA2_4CH 0x800E
#define PCI_DEVICE_ID_ML7223_DMA3_4CH 0x8017
#define PCI_DEVICE_ID_ML7223_DMA4_4CH 0x803B
static const struct pci_device_id pch_dma_id_table[] = {
DEFINE_PCI_DEVICE_TABLE(pch_dma_id_table) = {
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_EG20T_PCH_DMA_8CH), 8 },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_EG20T_PCH_DMA_4CH), 4 },
{ PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7213_DMA1_8CH), 8}, /* UART Video */
{ PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7213_DMA2_8CH), 8}, /* PCMIF SPI */
{ PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7213_DMA3_4CH), 4}, /* FPGA */
{ PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7213_DMA4_12CH), 12}, /* I2S */
{ PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7223_DMA1_4CH), 4}, /* UART */
{ PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7223_DMA2_4CH), 4}, /* Video SPI */
{ PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7223_DMA3_4CH), 4}, /* Security */
{ PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7223_DMA4_4CH), 4}, /* FPGA */
{ 0, },
};
......
@@ -2313,7 +2313,7 @@ static struct dma_async_tx_descriptor *ppc440spe_adma_prep_dma_memcpy(
if (unlikely(!len))
return NULL;
BUG_ON(unlikely(len > PPC440SPE_ADMA_DMA_MAX_BYTE_COUNT));
BUG_ON(len > PPC440SPE_ADMA_DMA_MAX_BYTE_COUNT);
spin_lock_bh(&ppc440spe_chan->lock);
@@ -2354,7 +2354,7 @@ static struct dma_async_tx_descriptor *ppc440spe_adma_prep_dma_memset(
if (unlikely(!len))
return NULL;
BUG_ON(unlikely(len > PPC440SPE_ADMA_DMA_MAX_BYTE_COUNT));
BUG_ON(len > PPC440SPE_ADMA_DMA_MAX_BYTE_COUNT);
spin_lock_bh(&ppc440spe_chan->lock);
@@ -2397,7 +2397,7 @@ static struct dma_async_tx_descriptor *ppc440spe_adma_prep_dma_xor(
dma_dest, dma_src, src_cnt));
if (unlikely(!len))
return NULL;
BUG_ON(unlikely(len > PPC440SPE_ADMA_XOR_MAX_BYTE_COUNT));
BUG_ON(len > PPC440SPE_ADMA_XOR_MAX_BYTE_COUNT);
dev_dbg(ppc440spe_chan->device->common.dev,
"ppc440spe adma%d: %s src_cnt: %d len: %u int_en: %d\n",
@@ -2887,7 +2887,7 @@ static struct dma_async_tx_descriptor *ppc440spe_adma_prep_dma_pq(
ADMA_LL_DBG(prep_dma_pq_dbg(ppc440spe_chan->device->id,
dst, src, src_cnt));
BUG_ON(!len);
BUG_ON(unlikely(len > PPC440SPE_ADMA_XOR_MAX_BYTE_COUNT));
BUG_ON(len > PPC440SPE_ADMA_XOR_MAX_BYTE_COUNT);
BUG_ON(!src_cnt);
if (src_cnt == 1 && dst[1] == src[0]) {
......
@@ -1829,7 +1829,7 @@ d40_get_dev_addr(struct d40_chan *chan, enum dma_data_direction direction)
{
struct stedma40_platform_data *plat = chan->base->plat_data;
struct stedma40_chan_cfg *cfg = &chan->dma_cfg;
dma_addr_t addr;
dma_addr_t addr = 0;
if (chan->runtime_addr)
return chan->runtime_addr;
@@ -2962,4 +2962,4 @@ static int __init stedma40_init(void)
{
return platform_driver_probe(&d40_driver, d40_probe);
}
arch_initcall(stedma40_init);
subsys_initcall(stedma40_init);
@@ -3,6 +3,7 @@
* AVR32 systems.)
*
* Copyright (C) 2007 Atmel Corporation
* Copyright (C) 2010-2011 ST Microelectronics
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
......