Commit 97a229f9 authored by Linus Torvalds

Merge tag 'dmaengine-4.11-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This time we fairly boring and bit small update.

   - Support for Intel iDMA 32-bit hardware
   - deprecate broken support for channel switching in async_tx
   - bunch of updates on stm32-dma
   - Cyclic support for zx dma and making it a generic zx dma driver
   - Small updates to bunch of other drivers"

* tag 'dmaengine-4.11-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (29 commits)
  async_tx: deprecate broken support for channel switching
  dmaengine: rcar-dmac: Widen DMA mask to 40 bits
  dmaengine: sun6i: allow build on ARM64 platforms (sun50i)
  dmaengine: Provide a wrapper for memcpy operations
  dmaengine: zx: fix build warning
  dmaengine: dw: we do support Merrifield SoC in PCI mode
  dmaengine: dw: add support of iDMA 32-bit hardware
  dmaengine: dw: introduce register mappings for iDMA 32-bit
  dmaengine: dw: introduce block2bytes() and bytes2block()
  dmaengine: dw: extract dwc_chan_pause() for future use
  dmaengine: dw: replace convert_burst() with one liner
  dmaengine: dw: register IRQ and DMA pool with instance ID
  dmaengine: dw: Fix data corruption in large device to memory transfers
  dmaengine: ste_dma40: indicate granularity on channels
  dmaengine: ste_dma40: indicate directions on channels
  dmaengine: stm32-dma: Add error messages if xlate fails
  dmaengine: dw: pci: remove LPE Audio DMA ID
  dmaengine: stm32-dma: Add max_burst support
  dmaengine: stm32-dma: Add synchronization support
  dmaengine: stm32-dma: Fix residue computation issue in cyclic mode
  ...
parents ff58d005 1ad65115
@@ -2,7 +2,7 @@ What:		/sys/devices/platform/hidma-*/chid
 		/sys/devices/platform/QCOM8061:*/chid
 Date:		Dec 2015
 KernelVersion:	4.4
-Contact:	"Sinan Kaya <okaya@cudeaurora.org>"
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
 Description:
 		Contains the ID of the channel within the HIDMA instance.
 		It is used to associate a given HIDMA channel with the
...
@@ -2,7 +2,7 @@ What:		/sys/devices/platform/hidma-mgmt*/chanops/chan*/priority
 		/sys/devices/platform/QCOM8060:*/chanops/chan*/priority
 Date:		Nov 2015
 KernelVersion:	4.4
-Contact:	"Sinan Kaya <okaya@cudeaurora.org>"
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
 Description:
 		Contains either 0 or 1 and indicates if the DMA channel is a
 		low priority (0) or high priority (1) channel.
@@ -11,7 +11,7 @@ What:		/sys/devices/platform/hidma-mgmt*/chanops/chan*/weight
 		/sys/devices/platform/QCOM8060:*/chanops/chan*/weight
 Date:		Nov 2015
 KernelVersion:	4.4
-Contact:	"Sinan Kaya <okaya@cudeaurora.org>"
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
 Description:
 		Contains 0..15 and indicates the weight of the channel among
 		equal priority channels during round robin scheduling.
@@ -20,7 +20,7 @@ What:		/sys/devices/platform/hidma-mgmt*/chreset_timeout_cycles
 		/sys/devices/platform/QCOM8060:*/chreset_timeout_cycles
 Date:		Nov 2015
 KernelVersion:	4.4
-Contact:	"Sinan Kaya <okaya@cudeaurora.org>"
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
 Description:
 		Contains the platform specific cycle value to wait after a
 		reset command is issued. If the value is chosen too short,
@@ -32,7 +32,7 @@ What:		/sys/devices/platform/hidma-mgmt*/dma_channels
 		/sys/devices/platform/QCOM8060:*/dma_channels
 Date:		Nov 2015
 KernelVersion:	4.4
-Contact:	"Sinan Kaya <okaya@cudeaurora.org>"
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
 Description:
 		Contains the number of dma channels supported by one instance
 		of HIDMA hardware. The value may change from chip to chip.
@@ -41,7 +41,7 @@ What:		/sys/devices/platform/hidma-mgmt*/hw_version_major
 		/sys/devices/platform/QCOM8060:*/hw_version_major
 Date:		Nov 2015
 KernelVersion:	4.4
-Contact:	"Sinan Kaya <okaya@cudeaurora.org>"
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
 Description:
 		Version number major for the hardware.
@@ -49,7 +49,7 @@ What:		/sys/devices/platform/hidma-mgmt*/hw_version_minor
 		/sys/devices/platform/QCOM8060:*/hw_version_minor
 Date:		Nov 2015
 KernelVersion:	4.4
-Contact:	"Sinan Kaya <okaya@cudeaurora.org>"
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
 Description:
 		Version number minor for the hardware.
@@ -57,7 +57,7 @@ What:		/sys/devices/platform/hidma-mgmt*/max_rd_xactions
 		/sys/devices/platform/QCOM8060:*/max_rd_xactions
 Date:		Nov 2015
 KernelVersion:	4.4
-Contact:	"Sinan Kaya <okaya@cudeaurora.org>"
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
 Description:
 		Contains a value between 0 and 31. Maximum number of
 		read transactions that can be issued back to back.
@@ -69,7 +69,7 @@ What:		/sys/devices/platform/hidma-mgmt*/max_read_request
 		/sys/devices/platform/QCOM8060:*/max_read_request
 Date:		Nov 2015
 KernelVersion:	4.4
-Contact:	"Sinan Kaya <okaya@cudeaurora.org>"
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
 Description:
 		Size of each read request. The value needs to be a power
 		of two and can be between 128 and 1024.
@@ -78,7 +78,7 @@ What:		/sys/devices/platform/hidma-mgmt*/max_wr_xactions
 		/sys/devices/platform/QCOM8060:*/max_wr_xactions
 Date:		Nov 2015
 KernelVersion:	4.4
-Contact:	"Sinan Kaya <okaya@cudeaurora.org>"
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
 Description:
 		Contains a value between 0 and 31. Maximum number of
 		write transactions that can be issued back to back.
@@ -91,7 +91,7 @@ What:		/sys/devices/platform/hidma-mgmt*/max_write_request
 		/sys/devices/platform/QCOM8060:*/max_write_request
 Date:		Nov 2015
 KernelVersion:	4.4
-Contact:	"Sinan Kaya <okaya@cudeaurora.org>"
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
 Description:
 		Size of each write request. The value needs to be a power
 		of two and can be between 128 and 1024.
@@ -40,8 +40,7 @@ Example:
 DMA clients connected to the STM32 DMA controller must use the format
 described in the dma.txt file, using a five-cell specifier for each
-channel: a phandle plus four integer cells.
-The four cells in order are:
+channel: a phandle to the DMA controller plus the following four integer cells:
 
 1. The channel id
 2. The request line number
@@ -61,7 +60,7 @@ The four cells in order are:
 	0x1: medium
 	0x2: high
 	0x3: very high
-5. A 32bit mask specifying the DMA FIFO threshold configuration which are device
+4. A 32bit mask specifying the DMA FIFO threshold configuration which are device
 	dependent:
  -bit 0-1: Fifo threshold
 	0x0: 1/4 full FIFO
...
@@ -157,7 +157,7 @@ config DMA_SUN4I
 
 config DMA_SUN6I
 	tristate "Allwinner A31 SoCs DMA support"
-	depends on MACH_SUN6I || MACH_SUN8I || COMPILE_TEST
+	depends on MACH_SUN6I || MACH_SUN8I || (ARM64 && ARCH_SUNXI) || COMPILE_TEST
 	depends on RESET_CONTROLLER
 	select DMA_ENGINE
 	select DMA_VIRTUAL_CHANNELS
@@ -458,7 +458,7 @@ config STM32_DMA
 	help
 	  Enable support for the on-chip DMA controller on STMicroelectronics
 	  STM32 MCUs.
-	  If you have a board based on such a MCU and wish to use DMA say Y or M
+	  If you have a board based on such a MCU and wish to use DMA say Y
 	  here.
 
 config S3C24XX_DMAC
@@ -571,12 +571,12 @@ config XILINX_ZYNQMP_DMA
 	  Enable support for Xilinx ZynqMP DMA controller.
 
 config ZX_DMA
-	tristate "ZTE ZX296702 DMA support"
+	tristate "ZTE ZX DMA support"
 	depends on ARCH_ZX || COMPILE_TEST
 	select DMA_ENGINE
 	select DMA_VIRTUAL_CHANNELS
 	help
-	  Support the DMA engine for ZTE ZX296702 platform devices.
+	  Support the DMA engine for ZTE ZX family platform devices.
 
 # driver files
...
@@ -66,7 +66,7 @@ obj-$(CONFIG_TI_CPPI41) += cppi41.o
 obj-$(CONFIG_TI_DMA_CROSSBAR) += ti-dma-crossbar.o
 obj-$(CONFIG_TI_EDMA) += edma.o
 obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
-obj-$(CONFIG_ZX_DMA) += zx296702_dma.o
+obj-$(CONFIG_ZX_DMA) += zx_dma.o
 obj-$(CONFIG_ST_FDMA) += st_fdma.o
 
 obj-y += qcom/
...
@@ -65,7 +65,7 @@
 #include <linux/mempool.h>
 
 static DEFINE_MUTEX(dma_list_mutex);
-static DEFINE_IDR(dma_idr);
+static DEFINE_IDA(dma_ida);
 static LIST_HEAD(dma_device_list);
 static long dmaengine_ref_count;
 
@@ -162,7 +162,7 @@ static void chan_dev_release(struct device *dev)
 	chan_dev = container_of(dev, typeof(*chan_dev), device);
 	if (atomic_dec_and_test(chan_dev->idr_ref)) {
 		mutex_lock(&dma_list_mutex);
-		idr_remove(&dma_idr, chan_dev->dev_id);
+		ida_remove(&dma_ida, chan_dev->dev_id);
 		mutex_unlock(&dma_list_mutex);
 		kfree(chan_dev->idr_ref);
 	}
@@ -898,14 +898,15 @@ static int get_dma_id(struct dma_device *device)
 {
 	int rc;
 
-	mutex_lock(&dma_list_mutex);
-
-	rc = idr_alloc(&dma_idr, NULL, 0, 0, GFP_KERNEL);
-	if (rc >= 0)
-		device->dev_id = rc;
+	do {
+		if (!ida_pre_get(&dma_ida, GFP_KERNEL))
+			return -ENOMEM;
+		mutex_lock(&dma_list_mutex);
+		rc = ida_get_new(&dma_ida, &device->dev_id);
+		mutex_unlock(&dma_list_mutex);
+	} while (rc == -EAGAIN);
 
-	mutex_unlock(&dma_list_mutex);
-	return rc < 0 ? rc : 0;
+	return rc;
 }
 
 /**
@@ -1035,7 +1036,7 @@ int dma_async_device_register(struct dma_device *device)
 	/* if we never registered a channel just release the idr */
 	if (atomic_read(idr_ref) == 0) {
 		mutex_lock(&dma_list_mutex);
-		idr_remove(&dma_idr, device->dev_id);
+		ida_remove(&dma_ida, device->dev_id);
 		mutex_unlock(&dma_list_mutex);
 		kfree(idr_ref);
 		return rc;
...
This diff is collapsed.
@@ -15,6 +15,18 @@
 
 #include "internal.h"
 
+static struct dw_dma_platform_data mrfld_pdata = {
+	.nr_channels = 8,
+	.is_private = true,
+	.is_memcpy = true,
+	.is_idma32 = true,
+	.chan_allocation_order = CHAN_ALLOCATION_ASCENDING,
+	.chan_priority = CHAN_PRIORITY_ASCENDING,
+	.block_size = 131071,
+	.nr_masters = 1,
+	.data_width = {4},
+};
+
 static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
 {
 	const struct dw_dma_platform_data *pdata = (void *)pid->driver_data;
@@ -47,6 +59,7 @@ static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
 		return -ENOMEM;
 
 	chip->dev = &pdev->dev;
+	chip->id = pdev->devfn;
 	chip->regs = pcim_iomap_table(pdev)[0];
 	chip->irq = pdev->irq;
 	chip->pdata = pdata;
@@ -95,14 +108,16 @@ static const struct dev_pm_ops dw_pci_dev_pm_ops = {
 };
 
 static const struct pci_device_id dw_pci_id_table[] = {
-	/* Medfield */
+	/* Medfield (GPDMA) */
 	{ PCI_VDEVICE(INTEL, 0x0827) },
-	{ PCI_VDEVICE(INTEL, 0x0830) },
 
 	/* BayTrail */
 	{ PCI_VDEVICE(INTEL, 0x0f06) },
 	{ PCI_VDEVICE(INTEL, 0x0f40) },
 
+	/* Merrifield iDMA 32-bit (GPDMA) */
+	{ PCI_VDEVICE(INTEL, 0x11a2), (kernel_ulong_t)&mrfld_pdata },
+
 	/* Braswell */
 	{ PCI_VDEVICE(INTEL, 0x2286) },
 	{ PCI_VDEVICE(INTEL, 0x22c0) },
...
@@ -202,6 +202,7 @@ static int dw_probe(struct platform_device *pdev)
 		pdata = dw_dma_parse_dt(pdev);
 
 	chip->dev = dev;
+	chip->id = pdev->id;
 	chip->pdata = pdata;
 
 	chip->clk = devm_clk_get(chip->dev, "hclk");
...
@@ -3,15 +3,19 @@
 *
 * Copyright (C) 2005-2007 Atmel Corporation
 * Copyright (C) 2010-2011 ST Microelectronics
+ * Copyright (C) 2016 Intel Corporation
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */
 
+#include <linux/bitops.h>
 #include <linux/interrupt.h>
 #include <linux/dmaengine.h>
 
+#include <linux/io-64-nonatomic-hi-lo.h>
+
 #include "internal.h"
 
 #define DW_DMA_MAX_NR_REQUESTS	16
@@ -85,9 +89,9 @@ struct dw_dma_regs {
 	DW_REG(ID);
 	DW_REG(TEST);
 
-	/* reserved */
-	DW_REG(__reserved0);
-	DW_REG(__reserved1);
+	/* iDMA 32-bit support */
+	DW_REG(CLASS_PRIORITY0);
+	DW_REG(CLASS_PRIORITY1);
 
 	/* optional encoded params, 0x3c8..0x3f7 */
 	u32	__reserved;
@@ -99,6 +103,17 @@ struct dw_dma_regs {
 	/* top-level parameters */
 	u32	DW_PARAMS;
+
+	/* component ID */
+	u32	COMP_TYPE;
+	u32	COMP_VERSION;
+
+	/* iDMA 32-bit support */
+	DW_REG(FIFO_PARTITION0);
+	DW_REG(FIFO_PARTITION1);
+
+	DW_REG(SAI_ERR);
+	DW_REG(GLOBAL_CFG);
 };
 
 /*
@@ -170,8 +185,9 @@ enum dw_dma_msize {
 #define DWC_CTLL_LLP_S_EN	(1 << 28)	/* src block chain */
 
 /* Bitfields in CTL_HI */
-#define DWC_CTLH_DONE		0x00001000
-#define DWC_CTLH_BLOCK_TS_MASK	0x00000fff
+#define DWC_CTLH_BLOCK_TS_MASK	GENMASK(11, 0)
+#define DWC_CTLH_BLOCK_TS(x)	((x) & DWC_CTLH_BLOCK_TS_MASK)
+#define DWC_CTLH_DONE		(1 << 12)
 
 /* Bitfields in CFG_LO */
 #define DWC_CFGL_CH_PRIOR_MASK	(0x7 << 5)	/* priority mask */
@@ -214,6 +230,33 @@ enum dw_dma_msize {
 /* Bitfields in CFG */
 #define DW_CFG_DMA_EN		(1 << 0)
 
+/* iDMA 32-bit support */
+
+/* Bitfields in CTL_HI */
+#define IDMA32C_CTLH_BLOCK_TS_MASK	GENMASK(16, 0)
+#define IDMA32C_CTLH_BLOCK_TS(x)	((x) & IDMA32C_CTLH_BLOCK_TS_MASK)
+#define IDMA32C_CTLH_DONE		(1 << 17)
+
+/* Bitfields in CFG_LO */
+#define IDMA32C_CFGL_DST_BURST_ALIGN	(1 << 0)	/* dst burst align */
+#define IDMA32C_CFGL_SRC_BURST_ALIGN	(1 << 1)	/* src burst align */
+#define IDMA32C_CFGL_CH_DRAIN		(1 << 10)	/* drain FIFO */
+#define IDMA32C_CFGL_DST_OPT_BL		(1 << 20)	/* optimize dst burst length */
+#define IDMA32C_CFGL_SRC_OPT_BL		(1 << 21)	/* optimize src burst length */
+
+/* Bitfields in CFG_HI */
+#define IDMA32C_CFGH_SRC_PER(x)		((x) << 0)
+#define IDMA32C_CFGH_DST_PER(x)		((x) << 4)
+#define IDMA32C_CFGH_RD_ISSUE_THD(x)	((x) << 8)
+#define IDMA32C_CFGH_RW_ISSUE_THD(x)	((x) << 18)
+#define IDMA32C_CFGH_SRC_PER_EXT(x)	((x) << 28)	/* src peripheral extension */
+#define IDMA32C_CFGH_DST_PER_EXT(x)	((x) << 30)	/* dst peripheral extension */
+
+/* Bitfields in FIFO_PARTITION */
+#define IDMA32C_FP_PSIZE_CH0(x)		((x) << 0)
+#define IDMA32C_FP_PSIZE_CH1(x)		((x) << 13)
+#define IDMA32C_FP_UPDATE		(1 << 26)
+
 enum dw_dmac_flags {
 	DW_DMA_IS_CYCLIC = 0,
 	DW_DMA_IS_SOFT_LLP = 1,
@@ -270,6 +313,7 @@ static inline struct dw_dma_chan *to_dw_dma_chan(struct dma_chan *chan)
 
 struct dw_dma {
 	struct dma_device	dma;
+	char			name[20];
 	void __iomem		*regs;
 	struct dma_pool		*desc_pool;
 	struct tasklet_struct	tasklet;
@@ -293,6 +337,11 @@ static inline struct dw_dma_regs __iomem *__dw_regs(struct dw_dma *dw)
 #define dma_writel(dw, name, val) \
 	dma_writel_native((val), &(__dw_regs(dw)->name))
 
+#define idma32_readq(dw, name)				\
+	hi_lo_readq(&(__dw_regs(dw)->name))
+#define idma32_writeq(dw, name, val)			\
+	hi_lo_writeq((val), &(__dw_regs(dw)->name))
+
 #define channel_set_bit(dw, reg, mask) \
 	dma_writel(dw, reg, ((mask) << 8) | (mask))
 #define channel_clear_bit(dw, reg, mask) \
...
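
For orientation, here is a minimal sketch of how the reworked CTL_HI bitfields above might be consumed. It is illustrative only and not taken from the (collapsed) core.c diff: the helper name and the bool flag are made up; only the DWC_CTLH_*/IDMA32C_CTLH_* macros come from the hunk above.

/*
 * Hypothetical helper, assuming the macros defined in dw/regs.h above.
 * BLOCK_TS occupies bits 11:0 on classic DW DMA and bits 16:0 on the
 * Intel iDMA 32-bit variant, so the decode differs per controller type.
 */
static inline u32 example_block_ts(u32 ctlhi, bool is_idma32)
{
	return is_idma32 ? IDMA32C_CTLH_BLOCK_TS(ctlhi)
			 : DWC_CTLH_BLOCK_TS(ctlhi);
}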
@@ -272,7 +272,7 @@ static void ipu_irq_handler(struct irq_desc *desc)
 	u32 status;
 	int i, line;
 
-	for (i = IPU_IRQ_NR_FN_BANKS; i < IPU_IRQ_NR_BANKS; i++) {
+	for (i = 0; i < IPU_IRQ_NR_BANKS; i++) {
 		struct ipu_irq_bank *bank = irq_bank + i;
 
 		raw_spin_lock(&bank_lock);
...
@@ -1724,6 +1724,7 @@ static int rcar_dmac_probe(struct platform_device *pdev)
 
 	dmac->dev = &pdev->dev;
 	platform_set_drvdata(pdev, dmac);
+	dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));
 
 	ret = rcar_dmac_parse_of(&pdev->dev, dmac);
 	if (ret < 0)
...
@@ -2809,12 +2809,14 @@ static void __init d40_chan_init(struct d40_base *base, struct dma_device *dma,
 
 static void d40_ops_init(struct d40_base *base, struct dma_device *dev)
 {
-	if (dma_has_cap(DMA_SLAVE, dev->cap_mask))
+	if (dma_has_cap(DMA_SLAVE, dev->cap_mask)) {
 		dev->device_prep_slave_sg = d40_prep_slave_sg;
+		dev->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	}
 
 	if (dma_has_cap(DMA_MEMCPY, dev->cap_mask)) {
 		dev->device_prep_dma_memcpy = d40_prep_memcpy;
+		dev->directions = BIT(DMA_MEM_TO_MEM);
 
 		/*
 		 * This controller can only access address at even
 		 * 32bit boundaries, i.e. 2^2
@@ -2836,6 +2838,7 @@ static void d40_ops_init(struct d40_base *base, struct dma_device *dev)
 	dev->device_pause = d40_pause;
 	dev->device_resume = d40_resume;
 	dev->device_terminate_all = d40_terminate_all;
+	dev->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
 	dev->dev = base->dev;
 }
 
...
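
As a hedged illustration of why advertising directions and residue granularity matters, the sketch below shows how a client could observe these capabilities through the existing dma_get_slave_caps() API; the function name below is made up, and the specific checks are just one plausible use.

#include <linux/dmaengine.h>

/* Illustrative only: query the capabilities the hunks above now report. */
static bool example_chan_supports_dev_to_mem(struct dma_chan *chan)
{
	struct dma_slave_caps caps;

	if (dma_get_slave_caps(chan, &caps))
		return false;

	/* ste_dma40 slave channels now populate directions and granularity */
	return (caps.directions & BIT(DMA_DEV_TO_MEM)) &&
	       caps.residue_granularity == DMA_RESIDUE_GRANULARITY_BURST;
}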
@@ -114,6 +114,7 @@
 #define STM32_DMA_MAX_CHANNELS		0x08
 #define STM32_DMA_MAX_REQUEST_ID	0x08
 #define STM32_DMA_MAX_DATA_PARAM	0x03
+#define STM32_DMA_MAX_BURST		16
 
 enum stm32_dma_width {
 	STM32_DMA_BYTE,
@@ -403,6 +404,13 @@ static int stm32_dma_terminate_all(struct dma_chan *c)
 	return 0;
 }
 
+static void stm32_dma_synchronize(struct dma_chan *c)
+{
+	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
+
+	vchan_synchronize(&chan->vchan);
+}
+
 static void stm32_dma_dump_reg(struct stm32_dma_chan *chan)
 {
 	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
@@ -421,7 +429,7 @@ static void stm32_dma_dump_reg(struct stm32_dma_chan *chan)
 	dev_dbg(chan2dev(chan), "SFCR: 0x%08x\n", sfcr);
 }
 
-static int stm32_dma_start_transfer(struct stm32_dma_chan *chan)
+static void stm32_dma_start_transfer(struct stm32_dma_chan *chan)
 {
 	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
 	struct virt_dma_desc *vdesc;
@@ -432,12 +440,12 @@ static int stm32_dma_start_transfer(struct stm32_dma_chan *chan)
 
 	ret = stm32_dma_disable_chan(chan);
 	if (ret < 0)
-		return ret;
+		return;
 
 	if (!chan->desc) {
 		vdesc = vchan_next_desc(&chan->vchan);
 		if (!vdesc)
-			return -EPERM;
+			return;
 
 		chan->desc = to_stm32_dma_desc(vdesc);
 		chan->next_sg = 0;
@@ -471,7 +479,7 @@ static int stm32_dma_start_transfer(struct stm32_dma_chan *chan)
 
 	chan->busy = true;
 
-	return 0;
+	dev_dbg(chan2dev(chan), "vchan %p: started\n", &chan->vchan);
 }
 
 static void stm32_dma_configure_next_sg(struct stm32_dma_chan *chan)
@@ -500,8 +508,6 @@ static void stm32_dma_configure_next_sg(struct stm32_dma_chan *chan)
 			dev_dbg(chan2dev(chan), "CT=0 <=> SM1AR: 0x%08x\n",
 				stm32_dma_read(dmadev, STM32_DMA_SM1AR(id)));
 		}
-
-		chan->next_sg++;
 	}
 }
 
@@ -510,6 +516,7 @@ static void stm32_dma_handle_chan_done(struct stm32_dma_chan *chan)
 	if (chan->desc) {
 		if (chan->desc->cyclic) {
 			vchan_cyclic_callback(&chan->desc->vdesc);
+			chan->next_sg++;
 			stm32_dma_configure_next_sg(chan);
 		} else {
 			chan->busy = false;
@@ -552,15 +559,13 @@ static void stm32_dma_issue_pending(struct dma_chan *c)
 {
 	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
 	unsigned long flags;
-	int ret;
 
 	spin_lock_irqsave(&chan->vchan.lock, flags);
-	if (!chan->busy) {
-		if (vchan_issue_pending(&chan->vchan) && !chan->desc) {
-			ret = stm32_dma_start_transfer(chan);
-			if ((!ret) && (chan->desc->cyclic))
-				stm32_dma_configure_next_sg(chan);
-		}
+	if (vchan_issue_pending(&chan->vchan) && !chan->desc && !chan->busy) {
+		dev_dbg(chan2dev(chan), "vchan %p: issued\n", &chan->vchan);
+		stm32_dma_start_transfer(chan);
+		if (chan->desc->cyclic)
+			stm32_dma_configure_next_sg(chan);
 	}
 	spin_unlock_irqrestore(&chan->vchan.lock, flags);
 }
@@ -848,26 +853,40 @@ static struct dma_async_tx_descriptor *stm32_dma_prep_dma_memcpy(
 	return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
 }
 
+static u32 stm32_dma_get_remaining_bytes(struct stm32_dma_chan *chan)
+{
+	u32 dma_scr, width, ndtr;
+	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
+
+	dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id));
+	width = STM32_DMA_SCR_PSIZE_GET(dma_scr);
+	ndtr = stm32_dma_read(dmadev, STM32_DMA_SNDTR(chan->id));
+
+	return ndtr << width;
+}
+
 static size_t stm32_dma_desc_residue(struct stm32_dma_chan *chan,
 				     struct stm32_dma_desc *desc,
 				     u32 next_sg)
 {
-	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
-	u32 dma_scr, width, residue, count;
+	u32 residue = 0;
 	int i;
 
-	residue = 0;
+	/*
+	 * In cyclic mode, for the last period, residue = remaining bytes from
+	 * NDTR
+	 */
+	if (chan->desc->cyclic && next_sg == 0)
+		return stm32_dma_get_remaining_bytes(chan);
 
+	/*
+	 * For all other periods in cyclic mode, and in sg mode,
+	 * residue = remaining bytes from NDTR + remaining periods/sg to be
+	 * transferred
+	 */
 	for (i = next_sg; i < desc->num_sgs; i++)
 		residue += desc->sg_req[i].len;
-
-	if (next_sg != 0) {
-		dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id));
-		width = STM32_DMA_SCR_PSIZE_GET(dma_scr);
-		count = stm32_dma_read(dmadev, STM32_DMA_SNDTR(chan->id));
-
-		residue += count << width;
-	}
+	residue += stm32_dma_get_remaining_bytes(chan);
 
 	return residue;
 }
@@ -964,27 +983,36 @@ static struct dma_chan *stm32_dma_of_xlate(struct of_phandle_args *dma_spec,
 					   struct of_dma *ofdma)
 {
 	struct stm32_dma_device *dmadev = ofdma->of_dma_data;
+	struct device *dev = dmadev->ddev.dev;
 	struct stm32_dma_cfg cfg;
 	struct stm32_dma_chan *chan;
 	struct dma_chan *c;
 
-	if (dma_spec->args_count < 4)
+	if (dma_spec->args_count < 4) {
+		dev_err(dev, "Bad number of cells\n");
 		return NULL;
+	}
 
 	cfg.channel_id = dma_spec->args[0];
 	cfg.request_line = dma_spec->args[1];
 	cfg.stream_config = dma_spec->args[2];
 	cfg.threshold = dma_spec->args[3];
 
-	if ((cfg.channel_id >= STM32_DMA_MAX_CHANNELS) || (cfg.request_line >=
-				STM32_DMA_MAX_REQUEST_ID))
+	if ((cfg.channel_id >= STM32_DMA_MAX_CHANNELS) ||
+	    (cfg.request_line >= STM32_DMA_MAX_REQUEST_ID)) {
+		dev_err(dev, "Bad channel and/or request id\n");
 		return NULL;
+	}
 
 	chan = &dmadev->chan[cfg.channel_id];
 
 	c = dma_get_slave_channel(&chan->vchan.chan);
-	if (c)
-		stm32_dma_set_config(chan, &cfg);
+	if (!c) {
+		dev_err(dev, "No more channel avalaible\n");
+		return NULL;
+	}
+
+	stm32_dma_set_config(chan, &cfg);
 
 	return c;
 }
@@ -1048,6 +1076,7 @@ static int stm32_dma_probe(struct platform_device *pdev)
 	dd->device_prep_dma_cyclic = stm32_dma_prep_dma_cyclic;
 	dd->device_config = stm32_dma_slave_config;
 	dd->device_terminate_all = stm32_dma_terminate_all;
+	dd->device_synchronize = stm32_dma_synchronize;
 	dd->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
 		BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
 		BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
@@ -1056,6 +1085,7 @@ static int stm32_dma_probe(struct platform_device *pdev)
 		BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
 	dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
 	dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+	dd->max_burst = STM32_DMA_MAX_BURST;
 	dd->dev = &pdev->dev;
 	INIT_LIST_HEAD(&dd->channels);
 
...
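
A standalone worked example of the residue accounting introduced above may help; it is illustrative only (the function and all values below are made up), and only the formula mirrors what stm32_dma_desc_residue() computes.

#include <stddef.h>

/*
 * Assume a cyclic transfer of three 256-byte periods with a 4-byte
 * peripheral width (PSIZE shift of 2), currently running period 1
 * (next_sg == 2) with NDTR == 16 items still pending.
 */
static size_t stm32_residue_example(void)
{
	const size_t sg_len[3] = { 256, 256, 256 };
	const unsigned int next_sg = 2, num_sgs = 3;
	const unsigned int ndtr = 16, width = 2;
	size_t residue = 0;
	unsigned int i;

	/* periods/sg entries that have not started yet */
	for (i = next_sg; i < num_sgs; i++)
		residue += sg_len[i];

	/* plus the bytes still pending in the running period: NDTR << width */
	residue += (size_t)ndtr << width;

	return residue;	/* 256 + 64 = 320 bytes */
}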
@@ -26,7 +26,7 @@
 
 #define DRIVER_NAME		"zx-dma"
 #define DMA_ALIGN		4
-#define DMA_MAX_SIZE		(0x10000 - PAGE_SIZE)
+#define DMA_MAX_SIZE		(0x10000 - 512)
 #define LLI_BLOCK_SIZE		(4 * PAGE_SIZE)
 
 #define REG_ZX_SRC_ADDR		0x00
@@ -365,7 +365,8 @@ static enum dma_status zx_dma_tx_status(struct dma_chan *chan,
 
 	bytes = 0;
 	clli = zx_dma_get_curr_lli(p);
-	index = (clli - ds->desc_hw_lli) / sizeof(struct zx_desc_hw);
+	index = (clli - ds->desc_hw_lli) /
+			sizeof(struct zx_desc_hw) + 1;
 	for (; index < ds->desc_num; index++) {
 		bytes += ds->desc_hw[index].src_x;
 		/* end of lli */
@@ -812,6 +813,7 @@ static int zx_dma_probe(struct platform_device *op)
 	INIT_LIST_HEAD(&d->slave.channels);
 	dma_cap_set(DMA_SLAVE, d->slave.cap_mask);
 	dma_cap_set(DMA_MEMCPY, d->slave.cap_mask);
+	dma_cap_set(DMA_CYCLIC, d->slave.cap_mask);
 	dma_cap_set(DMA_PRIVATE, d->slave.cap_mask);
 	d->slave.dev = &op->dev;
 	d->slave.device_free_chan_resources = zx_dma_free_chan_resources;
...
@@ -87,7 +87,7 @@ struct async_submit_ctl {
 	void *scribble;
 };
 
-#ifdef CONFIG_DMA_ENGINE
+#if defined(CONFIG_DMA_ENGINE) && !defined(CONFIG_ASYNC_TX_CHANNEL_SWITCH)
 #define async_tx_issue_pending_all dma_issue_pending_all
 
 /**
...
@@ -23,6 +23,7 @@ struct dw_dma;
 /**
  * struct dw_dma_chip - representation of DesignWare DMA controller hardware
  * @dev:		struct device of the DMA controller
+ * @id:			instance ID
  * @irq:		irq line
  * @regs:		memory mapped I/O space
  * @clk:		hclk clock
@@ -31,6 +32,7 @@ struct dw_dma;
  */
 struct dw_dma_chip {
 	struct device	*dev;
+	int		id;
 	int		irq;
 	void __iomem	*regs;
 	struct clk	*clk;
...
@@ -894,6 +894,17 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_memset(
 						      len, flags);
 }
 
+static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_memcpy(
+		struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+		size_t len, unsigned long flags)
+{
+	if (!chan || !chan->device || !chan->device->device_prep_dma_memcpy)
+		return NULL;
+
+	return chan->device->device_prep_dma_memcpy(chan, dest, src,
+						    len, flags);
+}
+
 static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_sg(
 		struct dma_chan *chan,
 		struct scatterlist *dst_sg, unsigned int dst_nents,
...
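
A short usage sketch for the new dmaengine_prep_dma_memcpy() wrapper above: the caller function name is made up, error handling is trimmed, and the channel and DMA addresses are assumed to be set up elsewhere; only the dmaengine calls themselves are existing API.

#include <linux/dmaengine.h>

/* Illustrative caller of the new wrapper (hypothetical helper). */
static int example_issue_memcpy(struct dma_chan *chan, dma_addr_t dst,
				dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	/* Returns NULL if the channel cannot do memcpy, so no extra cap check */
	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!tx)
		return -ENOMEM;

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		return -EIO;

	dma_async_issue_pending(chan);
	return 0;
}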
@@ -41,6 +41,7 @@ struct dw_dma_slave {
  * @is_private: The device channels should be marked as private and not for
  *	by the general purpose DMA channel allocator.
  * @is_memcpy: The device channels do support memory-to-memory transfers.
+ * @is_idma32: The type of the DMA controller is iDMA32
  * @chan_allocation_order: Allocate channels starting from 0 or 7
  * @chan_priority: Set channel priority increasing from 0 to 7 or 7 to 0.
  * @block_size: Maximum block size supported by the controller
@@ -53,6 +54,7 @@ struct dw_dma_platform_data {
 	unsigned int	nr_channels;
 	bool		is_private;
 	bool		is_memcpy;
+	bool		is_idma32;
 #define CHAN_ALLOCATION_ASCENDING	0	/* zero to seven */
 #define CHAN_ALLOCATION_DESCENDING	1	/* seven to zero */
 	unsigned char	chan_allocation_order;
...