Commit b859e7d1 authored by Linus Torvalds

Merge tag 'spi-v3.19' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi updates from Mark Brown:
 "Not a huge amount going on this release, mainly new drivers (there's a
  couple more waiting that didn't quite make the cut for this release
  too):

   - An interface for querying if the current transfer is the last in a
     message, allowing controllers that need special handling for the
     final transfer to use the core message parsing.
   - Support for Amlogic Meson SPIFC, Imagination Technologies SPFI,
     Intel Quark X1000 and Samsung Exynos 7 controllers"

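The first bullet above refers to the new spi_transfer_is_last() helper, added by "spi: core: Add spi_transfer_is_last() helper" in the shortlog below and used by the spi-meson-spifc driver in this merge. A minimal sketch of how a controller driver might call it from its transfer_one() path; the foo_* names are hypothetical:

static int foo_transfer_one(struct spi_master *master,
			    struct spi_device *spi,
			    struct spi_transfer *xfer)
{
	foo_do_hw_transfer(spi, xfer);		/* hypothetical hardware I/O */

	/* Only drop chip select after the final transfer of the message. */
	if (spi_transfer_is_last(master, xfer))
		foo_deassert_cs(spi);		/* hypothetical */

	return 0;
}
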
* tag 'spi-v3.19' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (38 commits)
  spi/s3c64xx: Remove redundant runtime PM management
  spi: fsl-spi: remove unused variable assignment
  spi: spi-fsl-spi: Return an error code in fsl_spi_do_one_msg()
  spi: core: Do not mangle error code from kthread_run()
  spi: fsl-espi: add (un)prepare_transfer_hardware calls to save power if SPI is not in use
  spi: fsl-(e)spi: migrate to generic master queueing
  spi/txx9: Deletion of an unnecessary check before the function call "clk_disable"
  spi: cadence: Fix 3-to-8 mux mode
  spi: cadence: Init HW after reading devicetree attributes
  spi: meson: Select REGMAP_MMIO
  spi: s3c64xx: add support for exynos7 SPI controller
  spi: spi-pxa2xx: SPI support for Intel Quark X1000
  spi: meson: meson_spifc_setup_speed() can be static
  spi: spi-pxa2xx: Add helpers for regiseters' accessing
  spi: spi-mxs: Fix mapping from vmalloc-ed buffer to scatter list
  spi: atmel: introduce probe deferring
  spi: atmel: remove compat for non DT board when requesting dma chan
  spi: meson: Add support for Amlogic Meson SPIFC
  spi: meson: Add device tree bindings documentation for SPIFC
  spi: core: Add spi_transfer_is_last() helper
  ...
parents 709d9f09 0e647037
......@@ -8,8 +8,10 @@ Required properties:
- gpio-sck: GPIO spec for the SCK line to use
- gpio-miso: GPIO spec for the MISO line to use
- gpio-mosi: GPIO spec for the MOSI line to use
- cs-gpios: GPIOs to use for chipselect lines
- num-chipselects: number of chipselect lines
- cs-gpios: GPIOs to use for chipselect lines.
Not needed if num-chipselects = <0>.
- num-chipselects: Number of chipselect lines. Should be <0> if a single device
with no chip select is connected.
Example:
......
IMG Synchronous Peripheral Flash Interface (SPFI) controller
Required properties:
- compatible: Must be "img,spfi".
- reg: Must contain the base address and length of the SPFI registers.
- interrupts: Must contain the SPFI interrupt.
- clocks: Must contain an entry for each entry in clock-names.
See ../clock/clock-bindings.txt for details.
- clock-names: Must include the following entries:
- spfi: SPI operating clock
- sys: SPI system interface clock
- dmas: Must contain an entry for each entry in dma-names.
See ../dma/dma.txt for details.
- dma-names: Must include the following entries:
- rx
- tx
- #address-cells: Must be 1.
- #size-cells: Must be 0.
Optional properties:
- img,supports-quad-mode: Should be set if the interface supports quad mode
SPI transfers.
Example:
spi@18100f00 {
compatible = "img,spfi";
reg = <0x18100f00 0x100>;
interrupts = <GIC_SHARED 22 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&spi_clk>, <&system_clk>;
clock-names = "spfi", "sys";
dmas = <&mdc 9 0xffffffff 0>, <&mdc 10 0xffffffff 0>;
dma-names = "rx", "tx";
#address-cells = <1>;
#size-cells = <0>;
};
Amlogic Meson SPI controllers
* SPIFC (SPI Flash Controller)
The Meson SPIFC is a controller optimized for communication with SPI
NOR memories; it has no DMA support and uses a 64-byte unified
transmit / receive buffer.
Required properties:
- compatible: should be "amlogic,meson6-spifc"
- reg: physical base address and length of the controller registers
- clocks: phandle of the input clock for the baud rate generator
- #address-cells: should be 1
- #size-cells: should be 0
spi@c1108c80 {
compatible = "amlogic,meson6-spifc";
reg = <0xc1108c80 0x80>;
clocks = <&clk81>;
#address-cells = <1>;
#size-cells = <0>;
};
......@@ -9,7 +9,7 @@ Required SoC Specific Properties:
- samsung,s3c2443-spi: for s3c2443, s3c2416 and s3c2450 platforms
- samsung,s3c6410-spi: for s3c6410 platforms
- samsung,s5pv210-spi: for s5pv210 and s5pc110 platforms
- samsung,exynos4210-spi: for exynos4 and exynos5 platforms
- samsung,exynos7-spi: for exynos7 platforms
- reg: physical base address of the controller and length of memory mapped
region.
......
......@@ -225,6 +225,13 @@ config SPI_GPIO
GPIO operations, you should be able to leverage that for better
speed with a custom version of this driver; see the source code.
config SPI_IMG_SPFI
tristate "IMG SPFI controller"
depends on MIPS || COMPILE_TEST
help
This enables support for the SPFI master controller found on
IMG SoCs.
config SPI_IMX
tristate "Freescale i.MX SPI controllers"
depends on ARCH_MXC || COMPILE_TEST
......@@ -301,6 +308,14 @@ config SPI_FSL_ESPI
The MPC8536 and later 85xx platforms use this controller, as do all
P10xx, P20xx, P30xx, P40xx and P50xx platforms.
config SPI_MESON_SPIFC
tristate "Amlogic Meson SPIFC controller"
depends on ARCH_MESON || COMPILE_TEST
select REGMAP_MMIO
help
This enables master mode support for the SPIFC (SPI flash
controller) available in Amlogic Meson SoCs.
config SPI_OC_TINY
tristate "OpenCores tiny SPI"
depends on GPIOLIB
......@@ -444,7 +459,7 @@ config SPI_S3C24XX_FIQ
config SPI_S3C64XX
tristate "Samsung S3C64XX series type SPI"
depends on PLAT_SAMSUNG
depends on (PLAT_SAMSUNG || ARCH_EXYNOS)
select S3C64XX_PL080 if ARCH_S3C64XX
help
SPI driver for Samsung S3C64XX and newer SoCs.
......
......@@ -40,8 +40,10 @@ obj-$(CONFIG_SPI_FSL_LIB) += spi-fsl-lib.o
obj-$(CONFIG_SPI_FSL_ESPI) += spi-fsl-espi.o
obj-$(CONFIG_SPI_FSL_SPI) += spi-fsl-spi.o
obj-$(CONFIG_SPI_GPIO) += spi-gpio.o
obj-$(CONFIG_SPI_IMG_SPFI) += spi-img-spfi.o
obj-$(CONFIG_SPI_IMX) += spi-imx.o
obj-$(CONFIG_SPI_LM70_LLP) += spi-lm70llp.o
obj-$(CONFIG_SPI_MESON_SPIFC) += spi-meson-spifc.o
obj-$(CONFIG_SPI_MPC512x_PSC) += spi-mpc512x-psc.o
obj-$(CONFIG_SPI_MPC52xx_PSC) += spi-mpc52xx-psc.o
obj-$(CONFIG_SPI_MPC52xx) += spi-mpc52xx.o
......
......@@ -26,6 +26,7 @@
#include <linux/io.h>
#include <linux/gpio.h>
#include <linux/pinctrl/consumer.h>
#include <linux/pm_runtime.h>
/* SPI register offsets */
#define SPI_CR 0x0000
......@@ -191,6 +192,8 @@
#define SPI_DMA_TIMEOUT (msecs_to_jiffies(1000))
#define AUTOSUSPEND_TIMEOUT 2000
struct atmel_spi_dma {
struct dma_chan *chan_rx;
struct dma_chan *chan_tx;
......@@ -414,23 +417,6 @@ static int atmel_spi_dma_slave_config(struct atmel_spi *as,
return err;
}
static bool filter(struct dma_chan *chan, void *pdata)
{
struct atmel_spi_dma *sl_pdata = pdata;
struct at_dma_slave *sl;
if (!sl_pdata)
return false;
sl = &sl_pdata->dma_slave;
if (sl->dma_dev == chan->device->dev) {
chan->private = sl;
return true;
} else {
return false;
}
}
static int atmel_spi_configure_dma(struct atmel_spi *as)
{
struct dma_slave_config slave_config;
......@@ -441,19 +427,24 @@ static int atmel_spi_configure_dma(struct atmel_spi *as)
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
as->dma.chan_tx = dma_request_slave_channel_compat(mask, filter,
&as->dma,
dev, "tx");
if (!as->dma.chan_tx) {
as->dma.chan_tx = dma_request_slave_channel_reason(dev, "tx");
if (IS_ERR(as->dma.chan_tx)) {
err = PTR_ERR(as->dma.chan_tx);
if (err == -EPROBE_DEFER) {
dev_warn(dev, "no DMA channel available at the moment\n");
return err;
}
dev_err(dev,
"DMA TX channel not available, SPI unable to use DMA\n");
err = -EBUSY;
goto error;
}
as->dma.chan_rx = dma_request_slave_channel_compat(mask, filter,
&as->dma,
dev, "rx");
/*
* No reason to check EPROBE_DEFER here since we have already requested
* tx channel. If it fails here, it's for another reason.
*/
as->dma.chan_rx = dma_request_slave_channel(dev, "rx");
if (!as->dma.chan_rx) {
dev_err(dev,
......@@ -474,7 +465,7 @@ static int atmel_spi_configure_dma(struct atmel_spi *as)
error:
if (as->dma.chan_rx)
dma_release_channel(as->dma.chan_rx);
if (as->dma.chan_tx)
if (!IS_ERR(as->dma.chan_tx))
dma_release_channel(as->dma.chan_tx);
return err;
}
......@@ -482,11 +473,9 @@ static int atmel_spi_configure_dma(struct atmel_spi *as)
static void atmel_spi_stop_dma(struct atmel_spi *as)
{
if (as->dma.chan_rx)
as->dma.chan_rx->device->device_control(as->dma.chan_rx,
DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(as->dma.chan_rx);
if (as->dma.chan_tx)
as->dma.chan_tx->device->device_control(as->dma.chan_tx,
DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(as->dma.chan_tx);
}
static void atmel_spi_release_dma(struct atmel_spi *as)
......@@ -1315,6 +1304,7 @@ static int atmel_spi_probe(struct platform_device *pdev)
master->setup = atmel_spi_setup;
master->transfer_one_message = atmel_spi_transfer_one_message;
master->cleanup = atmel_spi_cleanup;
master->auto_runtime_pm = true;
platform_set_drvdata(pdev, master);
as = spi_master_get_devdata(master);
......@@ -1347,8 +1337,11 @@ static int atmel_spi_probe(struct platform_device *pdev)
as->use_dma = false;
as->use_pdc = false;
if (as->caps.has_dma_support) {
if (atmel_spi_configure_dma(as) == 0)
ret = atmel_spi_configure_dma(as);
if (ret == 0)
as->use_dma = true;
else if (ret == -EPROBE_DEFER)
return ret;
} else {
as->use_pdc = true;
}
......@@ -1387,6 +1380,11 @@ static int atmel_spi_probe(struct platform_device *pdev)
dev_info(&pdev->dev, "Atmel SPI Controller at 0x%08lx (irq %d)\n",
(unsigned long)regs->start, irq);
pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
ret = devm_spi_register_master(&pdev->dev, master);
if (ret)
goto out_free_dma;
......@@ -1394,6 +1392,9 @@ static int atmel_spi_probe(struct platform_device *pdev)
return 0;
out_free_dma:
pm_runtime_disable(&pdev->dev);
pm_runtime_set_suspended(&pdev->dev);
if (as->use_dma)
atmel_spi_release_dma(as);
......@@ -1415,6 +1416,8 @@ static int atmel_spi_remove(struct platform_device *pdev)
struct spi_master *master = platform_get_drvdata(pdev);
struct atmel_spi *as = spi_master_get_devdata(master);
pm_runtime_get_sync(&pdev->dev);
/* reset the hardware and block queue progress */
spin_lock_irq(&as->lock);
if (as->use_dma) {
......@@ -1432,14 +1435,37 @@ static int atmel_spi_remove(struct platform_device *pdev)
clk_disable_unprepare(as->clk);
pm_runtime_put_noidle(&pdev->dev);
pm_runtime_disable(&pdev->dev);
return 0;
}
#ifdef CONFIG_PM_SLEEP
#ifdef CONFIG_PM
static int atmel_spi_runtime_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct atmel_spi *as = spi_master_get_devdata(master);
clk_disable_unprepare(as->clk);
pinctrl_pm_select_sleep_state(dev);
return 0;
}
static int atmel_spi_runtime_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct atmel_spi *as = spi_master_get_devdata(master);
pinctrl_pm_select_default_state(dev);
return clk_prepare_enable(as->clk);
}
static int atmel_spi_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct atmel_spi *as = spi_master_get_devdata(master);
struct spi_master *master = dev_get_drvdata(dev);
int ret;
/* Stop the queue running */
......@@ -1449,22 +1475,22 @@ static int atmel_spi_suspend(struct device *dev)
return ret;
}
clk_disable_unprepare(as->clk);
pinctrl_pm_select_sleep_state(dev);
if (!pm_runtime_suspended(dev))
atmel_spi_runtime_suspend(dev);
return 0;
}
static int atmel_spi_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct atmel_spi *as = spi_master_get_devdata(master);
struct spi_master *master = dev_get_drvdata(dev);
int ret;
pinctrl_pm_select_default_state(dev);
clk_prepare_enable(as->clk);
if (!pm_runtime_suspended(dev)) {
ret = atmel_spi_runtime_resume(dev);
if (ret)
return ret;
}
/* Start the queue running */
ret = spi_master_resume(master);
......@@ -1474,8 +1500,11 @@ static int atmel_spi_resume(struct device *dev)
return ret;
}
static SIMPLE_DEV_PM_OPS(atmel_spi_pm_ops, atmel_spi_suspend, atmel_spi_resume);
static const struct dev_pm_ops atmel_spi_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(atmel_spi_suspend, atmel_spi_resume)
SET_RUNTIME_PM_OPS(atmel_spi_runtime_suspend,
atmel_spi_runtime_resume, NULL)
};
#define ATMEL_SPI_PM_OPS (&atmel_spi_pm_ops)
#else
#define ATMEL_SPI_PM_OPS NULL
......
......@@ -47,6 +47,7 @@
#define CDNS_SPI_CR_CPHA_MASK 0x00000004 /* Clock Phase Control */
#define CDNS_SPI_CR_CPOL_MASK 0x00000002 /* Clock Polarity Control */
#define CDNS_SPI_CR_SSCTRL_MASK 0x00003C00 /* Slave Select Mask */
#define CDNS_SPI_CR_PERI_SEL_MASK 0x00000200 /* Peripheral Select Decode */
#define CDNS_SPI_CR_BAUD_DIV_MASK 0x00000038 /* Baud Rate Divisor Mask */
#define CDNS_SPI_CR_MSTREN_MASK 0x00000001 /* Master Enable Mask */
#define CDNS_SPI_CR_MANSTRTEN_MASK 0x00008000 /* Manual TX Enable Mask */
......@@ -148,6 +149,11 @@ static inline void cdns_spi_write(struct cdns_spi *xspi, u32 offset, u32 val)
*/
static void cdns_spi_init_hw(struct cdns_spi *xspi)
{
u32 ctrl_reg = CDNS_SPI_CR_DEFAULT_MASK;
if (xspi->is_decoded_cs)
ctrl_reg |= CDNS_SPI_CR_PERI_SEL_MASK;
cdns_spi_write(xspi, CDNS_SPI_ER_OFFSET,
CDNS_SPI_ER_DISABLE_MASK);
cdns_spi_write(xspi, CDNS_SPI_IDR_OFFSET,
......@@ -160,8 +166,7 @@ static void cdns_spi_init_hw(struct cdns_spi *xspi)
cdns_spi_write(xspi, CDNS_SPI_ISR_OFFSET,
CDNS_SPI_IXR_ALL_MASK);
cdns_spi_write(xspi, CDNS_SPI_CR_OFFSET,
CDNS_SPI_CR_DEFAULT_MASK);
cdns_spi_write(xspi, CDNS_SPI_CR_OFFSET, ctrl_reg);
cdns_spi_write(xspi, CDNS_SPI_ER_OFFSET,
CDNS_SPI_ER_ENABLE_MASK);
}
......@@ -516,6 +521,17 @@ static int cdns_spi_probe(struct platform_device *pdev)
goto clk_dis_apb;
}
ret = of_property_read_u32(pdev->dev.of_node, "num-cs", &num_cs);
if (ret < 0)
master->num_chipselect = CDNS_SPI_DEFAULT_NUM_CS;
else
master->num_chipselect = num_cs;
ret = of_property_read_u32(pdev->dev.of_node, "is-decoded-cs",
&xspi->is_decoded_cs);
if (ret < 0)
xspi->is_decoded_cs = 0;
/* SPI controller initializations */
cdns_spi_init_hw(xspi);
......@@ -534,19 +550,6 @@ static int cdns_spi_probe(struct platform_device *pdev)
goto remove_master;
}
ret = of_property_read_u32(pdev->dev.of_node, "num-cs", &num_cs);
if (ret < 0)
master->num_chipselect = CDNS_SPI_DEFAULT_NUM_CS;
else
master->num_chipselect = num_cs;
ret = of_property_read_u32(pdev->dev.of_node, "is-decoded-cs",
&xspi->is_decoded_cs);
if (ret < 0)
xspi->is_decoded_cs = 0;
master->prepare_transfer_hardware = cdns_prepare_transfer_hardware;
master->prepare_message = cdns_prepare_message;
master->transfer_one = cdns_transfer_one;
......
......@@ -26,6 +26,9 @@
#include <linux/intel_mid_dma.h>
#include <linux/pci.h>
#define RX_BUSY 0
#define TX_BUSY 1
struct mid_dma {
struct intel_mid_dma_slave dmas_tx;
struct intel_mid_dma_slave dmas_rx;
......@@ -98,41 +101,26 @@ static void mid_spi_dma_exit(struct dw_spi *dws)
}
/*
* dws->dma_chan_done is cleared before the dma transfer starts,
* callback for rx/tx channel will each increment it by 1.
* Reaching 2 means the whole spi transaction is done.
* dws->dma_chan_busy is set before the dma transfer starts, callback for tx
* channel will clear a corresponding bit.
*/
static void dw_spi_dma_done(void *arg)
static void dw_spi_dma_tx_done(void *arg)
{
struct dw_spi *dws = arg;
if (++dws->dma_chan_done != 2)
if (test_and_clear_bit(TX_BUSY, &dws->dma_chan_busy) & BIT(RX_BUSY))
return;
dw_spi_xfer_done(dws);
}
static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change)
static struct dma_async_tx_descriptor *dw_spi_dma_prepare_tx(struct dw_spi *dws)
{
struct dma_async_tx_descriptor *txdesc, *rxdesc;
struct dma_slave_config txconf, rxconf;
u16 dma_ctrl = 0;
/* 1. setup DMA related registers */
if (cs_change) {
spi_enable_chip(dws, 0);
dw_writew(dws, DW_SPI_DMARDLR, 0xf);
dw_writew(dws, DW_SPI_DMATDLR, 0x10);
if (dws->tx_dma)
dma_ctrl |= SPI_DMA_TDMAE;
if (dws->rx_dma)
dma_ctrl |= SPI_DMA_RDMAE;
dw_writew(dws, DW_SPI_DMACR, dma_ctrl);
spi_enable_chip(dws, 1);
}
struct dma_slave_config txconf;
struct dma_async_tx_descriptor *txdesc;
dws->dma_chan_done = 0;
if (!dws->tx_dma)
return NULL;
/* 2. Prepare the TX dma transfer */
txconf.direction = DMA_MEM_TO_DEV;
txconf.dst_addr = dws->dma_addr;
txconf.dst_maxburst = LNW_DMA_MSIZE_16;
......@@ -151,10 +139,33 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change)
1,
DMA_MEM_TO_DEV,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
txdesc->callback = dw_spi_dma_done;
txdesc->callback = dw_spi_dma_tx_done;
txdesc->callback_param = dws;
/* 3. Prepare the RX dma transfer */
return txdesc;
}
/*
* dws->dma_chan_busy is set before the dma transfer starts, callback for rx
* channel will clear a corresponding bit.
*/
static void dw_spi_dma_rx_done(void *arg)
{
struct dw_spi *dws = arg;
if (test_and_clear_bit(RX_BUSY, &dws->dma_chan_busy) & BIT(TX_BUSY))
return;
dw_spi_xfer_done(dws);
}
static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws)
{
struct dma_slave_config rxconf;
struct dma_async_tx_descriptor *rxdesc;
if (!dws->rx_dma)
return NULL;
rxconf.direction = DMA_DEV_TO_MEM;
rxconf.src_addr = dws->dma_addr;
rxconf.src_maxburst = LNW_DMA_MSIZE_16;
......@@ -173,15 +184,56 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change)
1,
DMA_DEV_TO_MEM,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
rxdesc->callback = dw_spi_dma_done;
rxdesc->callback = dw_spi_dma_rx_done;
rxdesc->callback_param = dws;
return rxdesc;
}
static void dw_spi_dma_setup(struct dw_spi *dws)
{
u16 dma_ctrl = 0;
spi_enable_chip(dws, 0);
dw_writew(dws, DW_SPI_DMARDLR, 0xf);
dw_writew(dws, DW_SPI_DMATDLR, 0x10);
if (dws->tx_dma)
dma_ctrl |= SPI_DMA_TDMAE;
if (dws->rx_dma)
dma_ctrl |= SPI_DMA_RDMAE;
dw_writew(dws, DW_SPI_DMACR, dma_ctrl);
spi_enable_chip(dws, 1);
}
static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change)
{
struct dma_async_tx_descriptor *txdesc, *rxdesc;
/* 1. setup DMA related registers */
if (cs_change)
dw_spi_dma_setup(dws);
/* 2. Prepare the TX dma transfer */
txdesc = dw_spi_dma_prepare_tx(dws);
/* 3. Prepare the RX dma transfer */
rxdesc = dw_spi_dma_prepare_rx(dws);
/* rx must be started before tx due to spi instinct */
dmaengine_submit(rxdesc);
dma_async_issue_pending(dws->rxchan);
if (rxdesc) {
set_bit(RX_BUSY, &dws->dma_chan_busy);
dmaengine_submit(rxdesc);
dma_async_issue_pending(dws->rxchan);
}
dmaengine_submit(txdesc);
dma_async_issue_pending(dws->txchan);
if (txdesc) {
set_bit(TX_BUSY, &dws->dma_chan_busy);
dmaengine_submit(txdesc);
dma_async_issue_pending(dws->txchan);
}
return 0;
}
......
......@@ -139,7 +139,7 @@ struct dw_spi {
struct scatterlist tx_sgl;
struct dma_chan *rxchan;
struct scatterlist rx_sgl;
int dma_chan_done;
unsigned long dma_chan_busy;
struct device *dma_dev;
dma_addr_t dma_addr; /* phy address of the Data register */
struct dw_spi_dma_ops *dma_ops;
......
......@@ -56,12 +56,15 @@ void fsl_spi_cpm_reinit_txrx(struct mpc8xxx_spi *mspi)
qe_issue_cmd(QE_INIT_TX_RX, mspi->subblock,
QE_CR_PROTOCOL_UNSPECIFIED, 0);
} else {
cpm_command(CPM_SPI_CMD, CPM_CR_INIT_TRX);
if (mspi->flags & SPI_CPM1) {
out_be32(&mspi->pram->rstate, 0);
out_be16(&mspi->pram->rbptr,
in_be16(&mspi->pram->rbase));
out_be32(&mspi->pram->tstate, 0);
out_be16(&mspi->pram->tbptr,
in_be16(&mspi->pram->tbase));
} else {
cpm_command(CPM_SPI_CMD, CPM_CR_INIT_TRX);
}
}
}
......
......@@ -438,7 +438,7 @@ static int dspi_resume(struct device *dev)
static SIMPLE_DEV_PM_OPS(dspi_pm, dspi_suspend, dspi_resume);
static struct regmap_config dspi_regmap_config = {
static const struct regmap_config dspi_regmap_config = {
.reg_bits = 32,
.val_bits = 32,
.reg_stride = 4,
......@@ -492,7 +492,6 @@ static int dspi_probe(struct platform_device *pdev)
goto out_master_put;
}
dspi_regmap_config.lock_arg = dspi;
dspi->regmap = devm_regmap_init_mmio_clk(&pdev->dev, "dspi", base,
&dspi_regmap_config);
if (IS_ERR(dspi->regmap)) {
......
......@@ -411,7 +411,8 @@ static void fsl_espi_rw_trans(struct spi_message *m,
kfree(local_buf);
}
static void fsl_espi_do_one_msg(struct spi_message *m)
static int fsl_espi_do_one_msg(struct spi_master *master,
struct spi_message *m)
{
struct spi_transfer *t;
u8 *rx_buf = NULL;
......@@ -441,8 +442,8 @@ static void fsl_espi_do_one_msg(struct spi_message *m)
m->actual_length = espi_trans.actual_length;
m->status = espi_trans.status;
if (m->complete)
m->complete(m->context);
spi_finalize_current_message(master);
return 0;
}
static int fsl_espi_setup(struct spi_device *spi)
......@@ -587,6 +588,38 @@ static void fsl_espi_remove(struct mpc8xxx_spi *mspi)
iounmap(mspi->reg_base);
}
static int fsl_espi_suspend(struct spi_master *master)
{
struct mpc8xxx_spi *mpc8xxx_spi;
struct fsl_espi_reg *reg_base;
u32 regval;
mpc8xxx_spi = spi_master_get_devdata(master);
reg_base = mpc8xxx_spi->reg_base;
regval = mpc8xxx_spi_read_reg(&reg_base->mode);
regval &= ~SPMODE_ENABLE;
mpc8xxx_spi_write_reg(&reg_base->mode, regval);
return 0;
}
static int fsl_espi_resume(struct spi_master *master)
{
struct mpc8xxx_spi *mpc8xxx_spi;
struct fsl_espi_reg *reg_base;
u32 regval;
mpc8xxx_spi = spi_master_get_devdata(master);
reg_base = mpc8xxx_spi->reg_base;
regval = mpc8xxx_spi_read_reg(&reg_base->mode);
regval |= SPMODE_ENABLE;
mpc8xxx_spi_write_reg(&reg_base->mode, regval);
return 0;
}
static struct spi_master * fsl_espi_probe(struct device *dev,
struct resource *mem, unsigned int irq)
{
......@@ -607,16 +640,16 @@ static struct spi_master * fsl_espi_probe(struct device *dev,
dev_set_drvdata(dev, master);
ret = mpc8xxx_spi_probe(dev, mem, irq);
if (ret)
goto err_probe;
mpc8xxx_spi_probe(dev, mem, irq);
master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 16);
master->setup = fsl_espi_setup;
master->cleanup = fsl_espi_cleanup;
master->transfer_one_message = fsl_espi_do_one_msg;
master->prepare_transfer_hardware = fsl_espi_resume;
master->unprepare_transfer_hardware = fsl_espi_suspend;
mpc8xxx_spi = spi_master_get_devdata(master);
mpc8xxx_spi->spi_do_one_msg = fsl_espi_do_one_msg;
mpc8xxx_spi->spi_remove = fsl_espi_remove;
mpc8xxx_spi->reg_base = ioremap(mem->start, resource_size(mem));
......@@ -762,25 +795,15 @@ static int of_fsl_espi_remove(struct platform_device *dev)
static int of_fsl_espi_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct mpc8xxx_spi *mpc8xxx_spi;
struct fsl_espi_reg *reg_base;
u32 regval;
int ret;
mpc8xxx_spi = spi_master_get_devdata(master);
reg_base = mpc8xxx_spi->reg_base;
ret = spi_master_suspend(master);
if (ret) {
dev_warn(dev, "cannot suspend master\n");
return ret;
}
regval = mpc8xxx_spi_read_reg(&reg_base->mode);
regval &= ~SPMODE_ENABLE;
mpc8xxx_spi_write_reg(&reg_base->mode, regval);
return 0;
return fsl_espi_suspend(master);
}
static int of_fsl_espi_resume(struct device *dev)
......
......@@ -61,44 +61,6 @@ struct mpc8xxx_spi_probe_info *to_of_pinfo(struct fsl_spi_platform_data *pdata)
return container_of(pdata, struct mpc8xxx_spi_probe_info, pdata);
}
static void mpc8xxx_spi_work(struct work_struct *work)
{
struct mpc8xxx_spi *mpc8xxx_spi = container_of(work, struct mpc8xxx_spi,
work);
spin_lock_irq(&mpc8xxx_spi->lock);
while (!list_empty(&mpc8xxx_spi->queue)) {
struct spi_message *m = container_of(mpc8xxx_spi->queue.next,
struct spi_message, queue);
list_del_init(&m->queue);
spin_unlock_irq(&mpc8xxx_spi->lock);
if (mpc8xxx_spi->spi_do_one_msg)
mpc8xxx_spi->spi_do_one_msg(m);
spin_lock_irq(&mpc8xxx_spi->lock);
}
spin_unlock_irq(&mpc8xxx_spi->lock);
}
int mpc8xxx_spi_transfer(struct spi_device *spi,
struct spi_message *m)
{
struct mpc8xxx_spi *mpc8xxx_spi = spi_master_get_devdata(spi->master);
unsigned long flags;
m->actual_length = 0;
m->status = -EINPROGRESS;
spin_lock_irqsave(&mpc8xxx_spi->lock, flags);
list_add_tail(&m->queue, &mpc8xxx_spi->queue);
queue_work(mpc8xxx_spi->workqueue, &mpc8xxx_spi->work);
spin_unlock_irqrestore(&mpc8xxx_spi->lock, flags);
return 0;
}
const char *mpc8xxx_spi_strmode(unsigned int flags)
{
if (flags & SPI_QE_CPU_MODE) {
......@@ -114,13 +76,12 @@ const char *mpc8xxx_spi_strmode(unsigned int flags)
return "CPU";
}
int mpc8xxx_spi_probe(struct device *dev, struct resource *mem,
void mpc8xxx_spi_probe(struct device *dev, struct resource *mem,
unsigned int irq)
{
struct fsl_spi_platform_data *pdata = dev_get_platdata(dev);
struct spi_master *master;
struct mpc8xxx_spi *mpc8xxx_spi;
int ret = 0;
master = dev_get_drvdata(dev);
......@@ -128,7 +89,6 @@ int mpc8xxx_spi_probe(struct device *dev, struct resource *mem,
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH
| SPI_LSB_FIRST | SPI_LOOP;
master->transfer = mpc8xxx_spi_transfer;
master->dev.of_node = dev->of_node;
mpc8xxx_spi = spi_master_get_devdata(master);
......@@ -147,22 +107,7 @@ int mpc8xxx_spi_probe(struct device *dev, struct resource *mem,
master->bus_num = pdata->bus_num;
master->num_chipselect = pdata->max_chipselect;
spin_lock_init(&mpc8xxx_spi->lock);
init_completion(&mpc8xxx_spi->done);
INIT_WORK(&mpc8xxx_spi->work, mpc8xxx_spi_work);
INIT_LIST_HEAD(&mpc8xxx_spi->queue);
mpc8xxx_spi->workqueue = create_singlethread_workqueue(
dev_name(master->dev.parent));
if (mpc8xxx_spi->workqueue == NULL) {
ret = -EBUSY;
goto err;
}
return 0;
err:
return ret;
}
int mpc8xxx_spi_remove(struct device *dev)
......@@ -173,8 +118,6 @@ int mpc8xxx_spi_remove(struct device *dev)
master = dev_get_drvdata(dev);
mpc8xxx_spi = spi_master_get_devdata(master);
flush_workqueue(mpc8xxx_spi->workqueue);
destroy_workqueue(mpc8xxx_spi->workqueue);
spi_unregister_master(master);
free_irq(mpc8xxx_spi->irq, mpc8xxx_spi);
......
......@@ -55,7 +55,6 @@ struct mpc8xxx_spi {
u32(*get_tx) (struct mpc8xxx_spi *);
/* hooks for different controller driver */
void (*spi_do_one_msg) (struct spi_message *m);
void (*spi_remove) (struct mpc8xxx_spi *mspi);
unsigned int count;
......@@ -78,12 +77,6 @@ struct mpc8xxx_spi {
int bits_per_word, int msb_first);
#endif
struct workqueue_struct *workqueue;
struct work_struct work;
struct list_head queue;
spinlock_t lock;
struct completion done;
};
......@@ -123,9 +116,8 @@ extern struct mpc8xxx_spi_probe_info *to_of_pinfo(
struct fsl_spi_platform_data *pdata);
extern int mpc8xxx_spi_bufs(struct mpc8xxx_spi *mspi,
struct spi_transfer *t, unsigned int len);
extern int mpc8xxx_spi_transfer(struct spi_device *spi, struct spi_message *m);
extern const char *mpc8xxx_spi_strmode(unsigned int flags);
extern int mpc8xxx_spi_probe(struct device *dev, struct resource *mem,
extern void mpc8xxx_spi_probe(struct device *dev, struct resource *mem,
unsigned int irq);
extern int mpc8xxx_spi_remove(struct device *dev);
extern int of_mpc8xxx_spi_probe(struct platform_device *ofdev);
......
......@@ -353,7 +353,8 @@ static int fsl_spi_bufs(struct spi_device *spi, struct spi_transfer *t,
return mpc8xxx_spi->count;
}
static void fsl_spi_do_one_msg(struct spi_message *m)
static int fsl_spi_do_one_msg(struct spi_master *master,
struct spi_message *m)
{
struct spi_device *spi = m->spi;
struct spi_transfer *t, *first;
......@@ -367,10 +368,9 @@ static void fsl_spi_do_one_msg(struct spi_message *m)
list_for_each_entry(t, &m->transfers, transfer_list) {
if ((first->bits_per_word != t->bits_per_word) ||
(first->speed_hz != t->speed_hz)) {
status = -EINVAL;
dev_err(&spi->dev,
"bits_per_word/speed_hz should be same for the same SPI transfer\n");
return;
return -EINVAL;
}
}
......@@ -408,8 +408,7 @@ static void fsl_spi_do_one_msg(struct spi_message *m)
}
m->status = status;
if (m->complete)
m->complete(m->context);
spi_finalize_current_message(master);
if (status || !cs_change) {
ndelay(nsecs);
......@@ -417,6 +416,7 @@ static void fsl_spi_do_one_msg(struct spi_message *m)
}
fsl_spi_setup_transfer(spi, NULL);
return 0;
}
static int fsl_spi_setup(struct spi_device *spi)
......@@ -624,15 +624,13 @@ static struct spi_master * fsl_spi_probe(struct device *dev,
dev_set_drvdata(dev, master);
ret = mpc8xxx_spi_probe(dev, mem, irq);
if (ret)
goto err_probe;
mpc8xxx_spi_probe(dev, mem, irq);
master->setup = fsl_spi_setup;
master->cleanup = fsl_spi_cleanup;
master->transfer_one_message = fsl_spi_do_one_msg;
mpc8xxx_spi = spi_master_get_devdata(master);
mpc8xxx_spi->spi_do_one_msg = fsl_spi_do_one_msg;
mpc8xxx_spi->spi_remove = fsl_spi_remove;
mpc8xxx_spi->max_bits_per_word = 32;
mpc8xxx_spi->type = fsl_spi_get_type(dev);
......@@ -704,7 +702,6 @@ static struct spi_master * fsl_spi_probe(struct device *dev,
err_ioremap:
fsl_spi_cpm_free(mpc8xxx_spi);
err_cpm_init:
err_probe:
spi_master_put(master);
err:
return ERR_PTR(ret);
......
......@@ -48,7 +48,7 @@ struct spi_gpio {
struct spi_bitbang bitbang;
struct spi_gpio_platform_data pdata;
struct platform_device *pdev;
int cs_gpios[0];
unsigned long cs_gpios[0];
};
/*----------------------------------------------------------------------*/
......@@ -220,7 +220,7 @@ static u32 spi_gpio_spec_txrx_word_mode3(struct spi_device *spi,
static void spi_gpio_chipselect(struct spi_device *spi, int is_active)
{
struct spi_gpio *spi_gpio = spi_to_spi_gpio(spi);
unsigned int cs = spi_gpio->cs_gpios[spi->chip_select];
unsigned long cs = spi_gpio->cs_gpios[spi->chip_select];
/* set initial clock polarity */
if (is_active)
......@@ -234,7 +234,7 @@ static void spi_gpio_chipselect(struct spi_device *spi, int is_active)
static int spi_gpio_setup(struct spi_device *spi)
{
unsigned int cs;
unsigned long cs;
int status = 0;
struct spi_gpio *spi_gpio = spi_to_spi_gpio(spi);
struct device_node *np = spi->master->dev.of_node;
......@@ -249,7 +249,7 @@ static int spi_gpio_setup(struct spi_device *spi)
/*
* ... otherwise, take it from spi->controller_data
*/
cs = (unsigned int)(uintptr_t) spi->controller_data;
cs = (uintptr_t) spi->controller_data;
}
if (!spi->controller_state) {
......@@ -277,7 +277,7 @@ static int spi_gpio_setup(struct spi_device *spi)
static void spi_gpio_cleanup(struct spi_device *spi)
{
struct spi_gpio *spi_gpio = spi_to_spi_gpio(spi);
unsigned int cs = spi_gpio->cs_gpios[spi->chip_select];
unsigned long cs = spi_gpio->cs_gpios[spi->chip_select];
if (cs != SPI_GPIO_NO_CHIPSELECT)
gpio_free(cs);
......@@ -413,6 +413,7 @@ static int spi_gpio_probe(struct platform_device *pdev)
struct spi_gpio_platform_data *pdata;
u16 master_flags = 0;
bool use_of = 0;
int num_devices;
status = spi_gpio_probe_dt(pdev);
if (status < 0)
......@@ -422,16 +423,21 @@ static int spi_gpio_probe(struct platform_device *pdev)
pdata = dev_get_platdata(&pdev->dev);
#ifdef GENERIC_BITBANG
if (!pdata || !pdata->num_chipselect)
if (!pdata || (!use_of && !pdata->num_chipselect))
return -ENODEV;
#endif
if (use_of && !SPI_N_CHIPSEL)
num_devices = 1;
else
num_devices = SPI_N_CHIPSEL;
status = spi_gpio_request(pdata, dev_name(&pdev->dev), &master_flags);
if (status < 0)
return status;
master = spi_alloc_master(&pdev->dev, sizeof(*spi_gpio) +
(sizeof(int) * SPI_N_CHIPSEL));
(sizeof(unsigned long) * num_devices));
if (!master) {
status = -ENOMEM;
goto gpio_free;
......@@ -446,7 +452,7 @@ static int spi_gpio_probe(struct platform_device *pdev)
master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32);
master->flags = master_flags;
master->bus_num = pdev->id;
master->num_chipselect = SPI_N_CHIPSEL;
master->num_chipselect = num_devices;
master->setup = spi_gpio_setup;
master->cleanup = spi_gpio_cleanup;
#ifdef CONFIG_OF
......@@ -461,9 +467,18 @@ static int spi_gpio_probe(struct platform_device *pdev)
* property of the node.
*/
for (i = 0; i < SPI_N_CHIPSEL; i++)
spi_gpio->cs_gpios[i] =
of_get_named_gpio(np, "cs-gpios", i);
if (!SPI_N_CHIPSEL)
spi_gpio->cs_gpios[0] = SPI_GPIO_NO_CHIPSELECT;
else
for (i = 0; i < SPI_N_CHIPSEL; i++) {
status = of_get_named_gpio(np, "cs-gpios", i);
if (status < 0) {
dev_err(&pdev->dev,
"invalid cs-gpios property\n");
goto gpio_free;
}
spi_gpio->cs_gpios[i] = status;
}
}
#endif
......
/*
* IMG SPFI controller driver
*
* Copyright (C) 2007,2008,2013 Imagination Technologies Ltd.
* Copyright (C) 2014 Google, Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/dmaengine.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/spi/spi.h>
#include <linux/spinlock.h>
#define SPFI_DEVICE_PARAMETER(x) (0x00 + 0x4 * (x))
#define SPFI_DEVICE_PARAMETER_BITCLK_SHIFT 24
#define SPFI_DEVICE_PARAMETER_BITCLK_MASK 0xff
#define SPFI_DEVICE_PARAMETER_CSSETUP_SHIFT 16
#define SPFI_DEVICE_PARAMETER_CSSETUP_MASK 0xff
#define SPFI_DEVICE_PARAMETER_CSHOLD_SHIFT 8
#define SPFI_DEVICE_PARAMETER_CSHOLD_MASK 0xff
#define SPFI_DEVICE_PARAMETER_CSDELAY_SHIFT 0
#define SPFI_DEVICE_PARAMETER_CSDELAY_MASK 0xff
#define SPFI_CONTROL 0x14
#define SPFI_CONTROL_CONTINUE BIT(12)
#define SPFI_CONTROL_SOFT_RESET BIT(11)
#define SPFI_CONTROL_SEND_DMA BIT(10)
#define SPFI_CONTROL_GET_DMA BIT(9)
#define SPFI_CONTROL_TMODE_SHIFT 5
#define SPFI_CONTROL_TMODE_MASK 0x7
#define SPFI_CONTROL_TMODE_SINGLE 0
#define SPFI_CONTROL_TMODE_DUAL 1
#define SPFI_CONTROL_TMODE_QUAD 2
#define SPFI_CONTROL_SPFI_EN BIT(0)
#define SPFI_TRANSACTION 0x18
#define SPFI_TRANSACTION_TSIZE_SHIFT 16
#define SPFI_TRANSACTION_TSIZE_MASK 0xffff
#define SPFI_PORT_STATE 0x1c
#define SPFI_PORT_STATE_DEV_SEL_SHIFT 20
#define SPFI_PORT_STATE_DEV_SEL_MASK 0x7
#define SPFI_PORT_STATE_CK_POL(x) BIT(19 - (x))
#define SPFI_PORT_STATE_CK_PHASE(x) BIT(14 - (x))
#define SPFI_TX_32BIT_VALID_DATA 0x20
#define SPFI_TX_8BIT_VALID_DATA 0x24
#define SPFI_RX_32BIT_VALID_DATA 0x28
#define SPFI_RX_8BIT_VALID_DATA 0x2c
#define SPFI_INTERRUPT_STATUS 0x30
#define SPFI_INTERRUPT_ENABLE 0x34
#define SPFI_INTERRUPT_CLEAR 0x38
#define SPFI_INTERRUPT_IACCESS BIT(12)
#define SPFI_INTERRUPT_GDEX8BIT BIT(11)
#define SPFI_INTERRUPT_ALLDONETRIG BIT(9)
#define SPFI_INTERRUPT_GDFUL BIT(8)
#define SPFI_INTERRUPT_GDHF BIT(7)
#define SPFI_INTERRUPT_GDEX32BIT BIT(6)
#define SPFI_INTERRUPT_GDTRIG BIT(5)
#define SPFI_INTERRUPT_SDFUL BIT(3)
#define SPFI_INTERRUPT_SDHF BIT(2)
#define SPFI_INTERRUPT_SDE BIT(1)
#define SPFI_INTERRUPT_SDTRIG BIT(0)
/*
* There are four parallel FIFOs of 16 bytes each. The word buffer
* (*_32BIT_VALID_DATA) accesses all four FIFOs at once, resulting in an
* effective FIFO size of 64 bytes. The byte buffer (*_8BIT_VALID_DATA)
* accesses only a single FIFO, resulting in an effective FIFO size of
* 16 bytes.
*/
#define SPFI_32BIT_FIFO_SIZE 64
#define SPFI_8BIT_FIFO_SIZE 16
struct img_spfi {
struct device *dev;
struct spi_master *master;
spinlock_t lock;
void __iomem *regs;
phys_addr_t phys;
int irq;
struct clk *spfi_clk;
struct clk *sys_clk;
struct dma_chan *rx_ch;
struct dma_chan *tx_ch;
bool tx_dma_busy;
bool rx_dma_busy;
};
static inline u32 spfi_readl(struct img_spfi *spfi, u32 reg)
{
return readl(spfi->regs + reg);
}
static inline void spfi_writel(struct img_spfi *spfi, u32 val, u32 reg)
{
writel(val, spfi->regs + reg);
}
static inline void spfi_start(struct img_spfi *spfi)
{
u32 val;
val = spfi_readl(spfi, SPFI_CONTROL);
val |= SPFI_CONTROL_SPFI_EN;
spfi_writel(spfi, val, SPFI_CONTROL);
}
static inline void spfi_stop(struct img_spfi *spfi)
{
u32 val;
val = spfi_readl(spfi, SPFI_CONTROL);
val &= ~SPFI_CONTROL_SPFI_EN;
spfi_writel(spfi, val, SPFI_CONTROL);
}
static inline void spfi_reset(struct img_spfi *spfi)
{
spfi_writel(spfi, SPFI_CONTROL_SOFT_RESET, SPFI_CONTROL);
udelay(1);
spfi_writel(spfi, 0, SPFI_CONTROL);
}
static void spfi_flush_tx_fifo(struct img_spfi *spfi)
{
unsigned long timeout = jiffies + msecs_to_jiffies(10);
spfi_writel(spfi, SPFI_INTERRUPT_SDE, SPFI_INTERRUPT_CLEAR);
while (time_before(jiffies, timeout)) {
if (spfi_readl(spfi, SPFI_INTERRUPT_STATUS) &
SPFI_INTERRUPT_SDE)
return;
cpu_relax();
}
dev_err(spfi->dev, "Timed out waiting for FIFO to drain\n");
spfi_reset(spfi);
}
static unsigned int spfi_pio_write32(struct img_spfi *spfi, const u32 *buf,
unsigned int max)
{
unsigned int count = 0;
u32 status;
while (count < max) {
spfi_writel(spfi, SPFI_INTERRUPT_SDFUL, SPFI_INTERRUPT_CLEAR);
status = spfi_readl(spfi, SPFI_INTERRUPT_STATUS);
if (status & SPFI_INTERRUPT_SDFUL)
break;
spfi_writel(spfi, buf[count / 4], SPFI_TX_32BIT_VALID_DATA);
count += 4;
}
return count;
}
static unsigned int spfi_pio_write8(struct img_spfi *spfi, const u8 *buf,
unsigned int max)
{
unsigned int count = 0;
u32 status;
while (count < max) {
spfi_writel(spfi, SPFI_INTERRUPT_SDFUL, SPFI_INTERRUPT_CLEAR);
status = spfi_readl(spfi, SPFI_INTERRUPT_STATUS);
if (status & SPFI_INTERRUPT_SDFUL)
break;
spfi_writel(spfi, buf[count], SPFI_TX_8BIT_VALID_DATA);
count++;
}
return count;
}
static unsigned int spfi_pio_read32(struct img_spfi *spfi, u32 *buf,
unsigned int max)
{
unsigned int count = 0;
u32 status;
while (count < max) {
spfi_writel(spfi, SPFI_INTERRUPT_GDEX32BIT,
SPFI_INTERRUPT_CLEAR);
status = spfi_readl(spfi, SPFI_INTERRUPT_STATUS);
if (!(status & SPFI_INTERRUPT_GDEX32BIT))
break;
buf[count / 4] = spfi_readl(spfi, SPFI_RX_32BIT_VALID_DATA);
count += 4;
}
return count;
}
static unsigned int spfi_pio_read8(struct img_spfi *spfi, u8 *buf,
unsigned int max)
{
unsigned int count = 0;
u32 status;
while (count < max) {
spfi_writel(spfi, SPFI_INTERRUPT_GDEX8BIT,
SPFI_INTERRUPT_CLEAR);
status = spfi_readl(spfi, SPFI_INTERRUPT_STATUS);
if (!(status & SPFI_INTERRUPT_GDEX8BIT))
break;
buf[count] = spfi_readl(spfi, SPFI_RX_8BIT_VALID_DATA);
count++;
}
return count;
}
static int img_spfi_start_pio(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *xfer)
{
struct img_spfi *spfi = spi_master_get_devdata(spi->master);
unsigned int tx_bytes = 0, rx_bytes = 0;
const void *tx_buf = xfer->tx_buf;
void *rx_buf = xfer->rx_buf;
unsigned long timeout;
if (tx_buf)
tx_bytes = xfer->len;
if (rx_buf)
rx_bytes = xfer->len;
spfi_start(spfi);
timeout = jiffies +
msecs_to_jiffies(xfer->len * 8 * 1000 / xfer->speed_hz + 100);
while ((tx_bytes > 0 || rx_bytes > 0) &&
time_before(jiffies, timeout)) {
unsigned int tx_count, rx_count;
switch (xfer->bits_per_word) {
case 32:
tx_count = spfi_pio_write32(spfi, tx_buf, tx_bytes);
rx_count = spfi_pio_read32(spfi, rx_buf, rx_bytes);
break;
case 8:
default:
tx_count = spfi_pio_write8(spfi, tx_buf, tx_bytes);
rx_count = spfi_pio_read8(spfi, rx_buf, rx_bytes);
break;
}
tx_buf += tx_count;
rx_buf += rx_count;
tx_bytes -= tx_count;
rx_bytes -= rx_count;
cpu_relax();
}
if (rx_bytes > 0 || tx_bytes > 0) {
dev_err(spfi->dev, "PIO transfer timed out\n");
spfi_reset(spfi);
return -ETIMEDOUT;
}
if (tx_buf)
spfi_flush_tx_fifo(spfi);
spfi_stop(spfi);
return 0;
}
static void img_spfi_dma_rx_cb(void *data)
{
struct img_spfi *spfi = data;
unsigned long flags;
spin_lock_irqsave(&spfi->lock, flags);
spfi->rx_dma_busy = false;
if (!spfi->tx_dma_busy) {
spfi_stop(spfi);
spi_finalize_current_transfer(spfi->master);
}
spin_unlock_irqrestore(&spfi->lock, flags);
}
static void img_spfi_dma_tx_cb(void *data)
{
struct img_spfi *spfi = data;
unsigned long flags;
spfi_flush_tx_fifo(spfi);
spin_lock_irqsave(&spfi->lock, flags);
spfi->tx_dma_busy = false;
if (!spfi->rx_dma_busy) {
spfi_stop(spfi);
spi_finalize_current_transfer(spfi->master);
}
spin_unlock_irqrestore(&spfi->lock, flags);
}
static int img_spfi_start_dma(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *xfer)
{
struct img_spfi *spfi = spi_master_get_devdata(spi->master);
struct dma_async_tx_descriptor *rxdesc = NULL, *txdesc = NULL;
struct dma_slave_config rxconf, txconf;
spfi->rx_dma_busy = false;
spfi->tx_dma_busy = false;
if (xfer->rx_buf) {
rxconf.direction = DMA_DEV_TO_MEM;
switch (xfer->bits_per_word) {
case 32:
rxconf.src_addr = spfi->phys + SPFI_RX_32BIT_VALID_DATA;
rxconf.src_addr_width = 4;
rxconf.src_maxburst = 4;
break;
case 8:
default:
rxconf.src_addr = spfi->phys + SPFI_RX_8BIT_VALID_DATA;
rxconf.src_addr_width = 1;
rxconf.src_maxburst = 1;
}
dmaengine_slave_config(spfi->rx_ch, &rxconf);
rxdesc = dmaengine_prep_slave_sg(spfi->rx_ch, xfer->rx_sg.sgl,
xfer->rx_sg.nents,
DMA_DEV_TO_MEM,
DMA_PREP_INTERRUPT);
if (!rxdesc)
goto stop_dma;
rxdesc->callback = img_spfi_dma_rx_cb;
rxdesc->callback_param = spfi;
}
if (xfer->tx_buf) {
txconf.direction = DMA_MEM_TO_DEV;
switch (xfer->bits_per_word) {
case 32:
txconf.dst_addr = spfi->phys + SPFI_TX_32BIT_VALID_DATA;
txconf.dst_addr_width = 4;
txconf.dst_maxburst = 4;
break;
case 8:
default:
txconf.dst_addr = spfi->phys + SPFI_TX_8BIT_VALID_DATA;
txconf.dst_addr_width = 1;
txconf.dst_maxburst = 1;
break;
}
dmaengine_slave_config(spfi->tx_ch, &txconf);
txdesc = dmaengine_prep_slave_sg(spfi->tx_ch, xfer->tx_sg.sgl,
xfer->tx_sg.nents,
DMA_MEM_TO_DEV,
DMA_PREP_INTERRUPT);
if (!txdesc)
goto stop_dma;
txdesc->callback = img_spfi_dma_tx_cb;
txdesc->callback_param = spfi;
}
if (xfer->rx_buf) {
spfi->rx_dma_busy = true;
dmaengine_submit(rxdesc);
dma_async_issue_pending(spfi->rx_ch);
}
if (xfer->tx_buf) {
spfi->tx_dma_busy = true;
dmaengine_submit(txdesc);
dma_async_issue_pending(spfi->tx_ch);
}
spfi_start(spfi);
return 1;
stop_dma:
dmaengine_terminate_all(spfi->rx_ch);
dmaengine_terminate_all(spfi->tx_ch);
return -EIO;
}
static void img_spfi_config(struct spi_master *master, struct spi_device *spi,
struct spi_transfer *xfer)
{
struct img_spfi *spfi = spi_master_get_devdata(spi->master);
u32 val, div;
/*
* output = spfi_clk * (BITCLK / 512), where BITCLK must be a
* power of 2 up to 256 (where 255 == 256 since BITCLK is 8 bits)
*/
div = DIV_ROUND_UP(master->max_speed_hz, xfer->speed_hz);
div = clamp(512 / (1 << get_count_order(div)), 1, 255);
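	/*
	 * Worked example with hypothetical numbers: if spfi_clk (and hence
	 * master->max_speed_hz) is 50 MHz and the transfer requests 12 MHz,
	 * then DIV_ROUND_UP(50, 12) = 5, get_count_order(5) = 3, and
	 * div = clamp(512 / 8, 1, 255) = 64, so the output clock is
	 * 50 MHz * 64 / 512 = 6.25 MHz, the largest power-of-two fraction
	 * that does not exceed the requested rate.
	 */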
val = spfi_readl(spfi, SPFI_DEVICE_PARAMETER(spi->chip_select));
val &= ~(SPFI_DEVICE_PARAMETER_BITCLK_MASK <<
SPFI_DEVICE_PARAMETER_BITCLK_SHIFT);
val |= div << SPFI_DEVICE_PARAMETER_BITCLK_SHIFT;
spfi_writel(spfi, val, SPFI_DEVICE_PARAMETER(spi->chip_select));
val = spfi_readl(spfi, SPFI_CONTROL);
val &= ~(SPFI_CONTROL_SEND_DMA | SPFI_CONTROL_GET_DMA);
if (xfer->tx_buf)
val |= SPFI_CONTROL_SEND_DMA;
if (xfer->rx_buf)
val |= SPFI_CONTROL_GET_DMA;
val &= ~(SPFI_CONTROL_TMODE_MASK << SPFI_CONTROL_TMODE_SHIFT);
if (xfer->tx_nbits == SPI_NBITS_DUAL &&
xfer->rx_nbits == SPI_NBITS_DUAL)
val |= SPFI_CONTROL_TMODE_DUAL << SPFI_CONTROL_TMODE_SHIFT;
else if (xfer->tx_nbits == SPI_NBITS_QUAD &&
xfer->rx_nbits == SPI_NBITS_QUAD)
val |= SPFI_CONTROL_TMODE_QUAD << SPFI_CONTROL_TMODE_SHIFT;
val &= ~SPFI_CONTROL_CONTINUE;
if (!xfer->cs_change && !list_is_last(&xfer->transfer_list,
&master->cur_msg->transfers))
val |= SPFI_CONTROL_CONTINUE;
spfi_writel(spfi, val, SPFI_CONTROL);
val = spfi_readl(spfi, SPFI_PORT_STATE);
if (spi->mode & SPI_CPHA)
val |= SPFI_PORT_STATE_CK_PHASE(spi->chip_select);
else
val &= ~SPFI_PORT_STATE_CK_PHASE(spi->chip_select);
if (spi->mode & SPI_CPOL)
val |= SPFI_PORT_STATE_CK_POL(spi->chip_select);
else
val &= ~SPFI_PORT_STATE_CK_POL(spi->chip_select);
spfi_writel(spfi, val, SPFI_PORT_STATE);
spfi_writel(spfi, xfer->len << SPFI_TRANSACTION_TSIZE_SHIFT,
SPFI_TRANSACTION);
}
static int img_spfi_transfer_one(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *xfer)
{
struct img_spfi *spfi = spi_master_get_devdata(spi->master);
bool dma_reset = false;
unsigned long flags;
int ret;
/*
* Stop all DMA and reset the controller if the previous transaction
 * timed out and never completed its DMA.
*/
spin_lock_irqsave(&spfi->lock, flags);
if (spfi->tx_dma_busy || spfi->rx_dma_busy) {
dev_err(spfi->dev, "SPI DMA still busy\n");
dma_reset = true;
}
spin_unlock_irqrestore(&spfi->lock, flags);
if (dma_reset) {
dmaengine_terminate_all(spfi->tx_ch);
dmaengine_terminate_all(spfi->rx_ch);
spfi_reset(spfi);
}
img_spfi_config(master, spi, xfer);
if (master->can_dma && master->can_dma(master, spi, xfer))
ret = img_spfi_start_dma(master, spi, xfer);
else
ret = img_spfi_start_pio(master, spi, xfer);
return ret;
}
static void img_spfi_set_cs(struct spi_device *spi, bool enable)
{
struct img_spfi *spfi = spi_master_get_devdata(spi->master);
u32 val;
val = spfi_readl(spfi, SPFI_PORT_STATE);
val &= ~(SPFI_PORT_STATE_DEV_SEL_MASK << SPFI_PORT_STATE_DEV_SEL_SHIFT);
val |= spi->chip_select << SPFI_PORT_STATE_DEV_SEL_SHIFT;
spfi_writel(spfi, val, SPFI_PORT_STATE);
}
static bool img_spfi_can_dma(struct spi_master *master, struct spi_device *spi,
struct spi_transfer *xfer)
{
if (xfer->bits_per_word == 8 && xfer->len > SPFI_8BIT_FIFO_SIZE)
return true;
if (xfer->bits_per_word == 32 && xfer->len > SPFI_32BIT_FIFO_SIZE)
return true;
return false;
}
static irqreturn_t img_spfi_irq(int irq, void *dev_id)
{
struct img_spfi *spfi = (struct img_spfi *)dev_id;
u32 status;
status = spfi_readl(spfi, SPFI_INTERRUPT_STATUS);
if (status & SPFI_INTERRUPT_IACCESS) {
spfi_writel(spfi, SPFI_INTERRUPT_IACCESS, SPFI_INTERRUPT_CLEAR);
dev_err(spfi->dev, "Illegal access interrupt");
return IRQ_HANDLED;
}
return IRQ_NONE;
}
static int img_spfi_probe(struct platform_device *pdev)
{
struct spi_master *master;
struct img_spfi *spfi;
struct resource *res;
int ret;
master = spi_alloc_master(&pdev->dev, sizeof(*spfi));
if (!master)
return -ENOMEM;
platform_set_drvdata(pdev, master);
spfi = spi_master_get_devdata(master);
spfi->dev = &pdev->dev;
spfi->master = master;
spin_lock_init(&spfi->lock);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
spfi->regs = devm_ioremap_resource(spfi->dev, res);
if (IS_ERR(spfi->regs)) {
ret = PTR_ERR(spfi->regs);
goto put_spi;
}
spfi->phys = res->start;
spfi->irq = platform_get_irq(pdev, 0);
if (spfi->irq < 0) {
ret = spfi->irq;
goto put_spi;
}
ret = devm_request_irq(spfi->dev, spfi->irq, img_spfi_irq,
IRQ_TYPE_LEVEL_HIGH, dev_name(spfi->dev), spfi);
if (ret)
goto put_spi;
spfi->sys_clk = devm_clk_get(spfi->dev, "sys");
if (IS_ERR(spfi->sys_clk)) {
ret = PTR_ERR(spfi->sys_clk);
goto put_spi;
}
spfi->spfi_clk = devm_clk_get(spfi->dev, "spfi");
if (IS_ERR(spfi->spfi_clk)) {
ret = PTR_ERR(spfi->spfi_clk);
goto put_spi;
}
ret = clk_prepare_enable(spfi->sys_clk);
if (ret)
goto put_spi;
ret = clk_prepare_enable(spfi->spfi_clk);
if (ret)
goto disable_pclk;
spfi_reset(spfi);
/*
* Only enable the error (IACCESS) interrupt. In PIO mode we'll
* poll the status of the FIFOs.
*/
spfi_writel(spfi, SPFI_INTERRUPT_IACCESS, SPFI_INTERRUPT_ENABLE);
master->auto_runtime_pm = true;
master->bus_num = pdev->id;
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_TX_DUAL | SPI_RX_DUAL;
if (of_property_read_bool(spfi->dev->of_node, "img,supports-quad-mode"))
master->mode_bits |= SPI_TX_QUAD | SPI_RX_QUAD;
master->num_chipselect = 5;
master->dev.of_node = pdev->dev.of_node;
master->bits_per_word_mask = SPI_BPW_MASK(32) | SPI_BPW_MASK(8);
master->max_speed_hz = clk_get_rate(spfi->spfi_clk);
master->min_speed_hz = master->max_speed_hz / 512;
master->set_cs = img_spfi_set_cs;
master->transfer_one = img_spfi_transfer_one;
spfi->tx_ch = dma_request_slave_channel(spfi->dev, "tx");
spfi->rx_ch = dma_request_slave_channel(spfi->dev, "rx");
if (!spfi->tx_ch || !spfi->rx_ch) {
if (spfi->tx_ch)
dma_release_channel(spfi->tx_ch);
if (spfi->rx_ch)
dma_release_channel(spfi->rx_ch);
dev_warn(spfi->dev, "Failed to get DMA channels, falling back to PIO mode\n");
} else {
master->dma_tx = spfi->tx_ch;
master->dma_rx = spfi->rx_ch;
master->can_dma = img_spfi_can_dma;
}
pm_runtime_set_active(spfi->dev);
pm_runtime_enable(spfi->dev);
ret = devm_spi_register_master(spfi->dev, master);
if (ret)
goto disable_pm;
return 0;
disable_pm:
pm_runtime_disable(spfi->dev);
if (spfi->rx_ch)
dma_release_channel(spfi->rx_ch);
if (spfi->tx_ch)
dma_release_channel(spfi->tx_ch);
clk_disable_unprepare(spfi->spfi_clk);
disable_pclk:
clk_disable_unprepare(spfi->sys_clk);
put_spi:
spi_master_put(master);
return ret;
}
static int img_spfi_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct img_spfi *spfi = spi_master_get_devdata(master);
if (spfi->tx_ch)
dma_release_channel(spfi->tx_ch);
if (spfi->rx_ch)
dma_release_channel(spfi->rx_ch);
pm_runtime_disable(spfi->dev);
if (!pm_runtime_status_suspended(spfi->dev)) {
clk_disable_unprepare(spfi->spfi_clk);
clk_disable_unprepare(spfi->sys_clk);
}
spi_master_put(master);
return 0;
}
#ifdef CONFIG_PM_RUNTIME
static int img_spfi_runtime_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct img_spfi *spfi = spi_master_get_devdata(master);
clk_disable_unprepare(spfi->spfi_clk);
clk_disable_unprepare(spfi->sys_clk);
return 0;
}
static int img_spfi_runtime_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct img_spfi *spfi = spi_master_get_devdata(master);
int ret;
ret = clk_prepare_enable(spfi->sys_clk);
if (ret)
return ret;
ret = clk_prepare_enable(spfi->spfi_clk);
if (ret) {
clk_disable_unprepare(spfi->sys_clk);
return ret;
}
return 0;
}
#endif /* CONFIG_PM_RUNTIME */
#ifdef CONFIG_PM_SLEEP
static int img_spfi_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
return spi_master_suspend(master);
}
static int img_spfi_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct img_spfi *spfi = spi_master_get_devdata(master);
int ret;
ret = pm_runtime_get_sync(dev);
if (ret)
return ret;
spfi_reset(spfi);
pm_runtime_put(dev);
return spi_master_resume(master);
}
#endif /* CONFIG_PM_SLEEP */
static const struct dev_pm_ops img_spfi_pm_ops = {
SET_RUNTIME_PM_OPS(img_spfi_runtime_suspend, img_spfi_runtime_resume,
NULL)
SET_SYSTEM_SLEEP_PM_OPS(img_spfi_suspend, img_spfi_resume)
};
static const struct of_device_id img_spfi_of_match[] = {
{ .compatible = "img,spfi", },
{ },
};
MODULE_DEVICE_TABLE(of, img_spfi_of_match);
static struct platform_driver img_spfi_driver = {
.driver = {
.name = "img-spfi",
.pm = &img_spfi_pm_ops,
.of_match_table = of_match_ptr(img_spfi_of_match),
},
.probe = img_spfi_probe,
.remove = img_spfi_remove,
};
module_platform_driver(img_spfi_driver);
MODULE_DESCRIPTION("IMG SPFI controller driver");
MODULE_AUTHOR("Andrew Bresticker <abrestic@chromium.org>");
MODULE_LICENSE("GPL v2");
/*
* Driver for Amlogic Meson SPI flash controller (SPIFC)
*
* Copyright (C) 2014 Beniamino Galvani <b.galvani@gmail.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* version 2 as published by the Free Software Foundation.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/spi/spi.h>
#include <linux/types.h>
/* register map */
#define REG_CMD 0x00
#define REG_ADDR 0x04
#define REG_CTRL 0x08
#define REG_CTRL1 0x0c
#define REG_STATUS 0x10
#define REG_CTRL2 0x14
#define REG_CLOCK 0x18
#define REG_USER 0x1c
#define REG_USER1 0x20
#define REG_USER2 0x24
#define REG_USER3 0x28
#define REG_USER4 0x2c
#define REG_SLAVE 0x30
#define REG_SLAVE1 0x34
#define REG_SLAVE2 0x38
#define REG_SLAVE3 0x3c
#define REG_C0 0x40
#define REG_B8 0x60
#define REG_MAX 0x7c
/* register fields */
#define CMD_USER BIT(18)
#define CTRL_ENABLE_AHB BIT(17)
#define CLOCK_SOURCE BIT(31)
#define CLOCK_DIV_SHIFT 12
#define CLOCK_DIV_MASK (0x3f << CLOCK_DIV_SHIFT)
#define CLOCK_CNT_HIGH_SHIFT 6
#define CLOCK_CNT_HIGH_MASK (0x3f << CLOCK_CNT_HIGH_SHIFT)
#define CLOCK_CNT_LOW_SHIFT 0
#define CLOCK_CNT_LOW_MASK (0x3f << CLOCK_CNT_LOW_SHIFT)
#define USER_DIN_EN_MS BIT(0)
#define USER_CMP_MODE BIT(2)
#define USER_UC_DOUT_SEL BIT(27)
#define USER_UC_DIN_SEL BIT(28)
#define USER_UC_MASK ((BIT(5) - 1) << 27)
#define USER1_BN_UC_DOUT_SHIFT 17
#define USER1_BN_UC_DOUT_MASK (0xff << 16)
#define USER1_BN_UC_DIN_SHIFT 8
#define USER1_BN_UC_DIN_MASK (0xff << 8)
#define USER4_CS_ACT BIT(30)
#define SLAVE_TRST_DONE BIT(4)
#define SLAVE_OP_MODE BIT(30)
#define SLAVE_SW_RST BIT(31)
#define SPIFC_BUFFER_SIZE 64
/**
* struct meson_spifc
* @master: the SPI master
* @regmap: regmap for device registers
* @clk: input clock of the built-in baud rate generator
 * @dev: the device structure
*/
struct meson_spifc {
struct spi_master *master;
struct regmap *regmap;
struct clk *clk;
struct device *dev;
};
static struct regmap_config spifc_regmap_config = {
.reg_bits = 32,
.val_bits = 32,
.reg_stride = 4,
.max_register = REG_MAX,
};
/**
* meson_spifc_wait_ready() - wait for the current operation to terminate
* @spifc: the Meson SPI device
* Return: 0 on success, a negative value on error
*/
static int meson_spifc_wait_ready(struct meson_spifc *spifc)
{
unsigned long deadline = jiffies + msecs_to_jiffies(5);
u32 data;
do {
regmap_read(spifc->regmap, REG_SLAVE, &data);
if (data & SLAVE_TRST_DONE)
return 0;
cond_resched();
} while (!time_after(jiffies, deadline));
return -ETIMEDOUT;
}
/**
* meson_spifc_drain_buffer() - copy data from device buffer to memory
* @spifc: the Meson SPI device
* @buf: the destination buffer
* @len: number of bytes to copy
*/
static void meson_spifc_drain_buffer(struct meson_spifc *spifc, u8 *buf,
int len)
{
u32 data;
int i = 0;
while (i < len) {
regmap_read(spifc->regmap, REG_C0 + i, &data);
if (len - i >= 4) {
*((u32 *)buf) = data;
buf += 4;
} else {
memcpy(buf, &data, len - i);
break;
}
i += 4;
}
}
/**
* meson_spifc_fill_buffer() - copy data from memory to device buffer
* @spifc: the Meson SPI device
* @buf: the source buffer
* @len: number of bytes to copy
*/
static void meson_spifc_fill_buffer(struct meson_spifc *spifc, const u8 *buf,
int len)
{
u32 data;
int i = 0;
while (i < len) {
if (len - i >= 4)
data = *(u32 *)buf;
else
memcpy(&data, buf, len - i);
regmap_write(spifc->regmap, REG_C0 + i, data);
buf += 4;
i += 4;
}
}
/**
* meson_spifc_setup_speed() - program the clock divider
* @spifc: the Meson SPI device
* @speed: desired speed in Hz
*/
static void meson_spifc_setup_speed(struct meson_spifc *spifc, u32 speed)
{
unsigned long parent, value;
int n;
parent = clk_get_rate(spifc->clk);
n = max_t(int, parent / speed - 1, 1);
dev_dbg(spifc->dev, "parent %lu, speed %u, n %d\n", parent,
speed, n);
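	/*
	 * Worked example with hypothetical numbers: for a 100 MHz parent
	 * clock and a 25 MHz request, n = max(100 / 25 - 1, 1) = 3, so the
	 * divider and low-count fields below are programmed with 3 and the
	 * high-count field with (3 + 1) / 2 - 1 = 1.
	 */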
value = (n << CLOCK_DIV_SHIFT) & CLOCK_DIV_MASK;
value |= (n << CLOCK_CNT_LOW_SHIFT) & CLOCK_CNT_LOW_MASK;
value |= (((n + 1) / 2 - 1) << CLOCK_CNT_HIGH_SHIFT) &
CLOCK_CNT_HIGH_MASK;
regmap_write(spifc->regmap, REG_CLOCK, value);
}
/**
* meson_spifc_txrx() - transfer a chunk of data
* @spifc: the Meson SPI device
* @xfer: the current SPI transfer
* @offset: offset of the data to transfer
* @len: length of the data to transfer
* @last_xfer: whether this is the last transfer of the message
* @last_chunk: whether this is the last chunk of the transfer
* Return: 0 on success, a negative value on error
*/
static int meson_spifc_txrx(struct meson_spifc *spifc,
struct spi_transfer *xfer,
int offset, int len, bool last_xfer,
bool last_chunk)
{
bool keep_cs = true;
int ret;
if (xfer->tx_buf)
meson_spifc_fill_buffer(spifc, xfer->tx_buf + offset, len);
/* enable DOUT stage */
regmap_update_bits(spifc->regmap, REG_USER, USER_UC_MASK,
USER_UC_DOUT_SEL);
regmap_write(spifc->regmap, REG_USER1,
(8 * len - 1) << USER1_BN_UC_DOUT_SHIFT);
/* enable data input during DOUT */
regmap_update_bits(spifc->regmap, REG_USER, USER_DIN_EN_MS,
USER_DIN_EN_MS);
if (last_chunk) {
if (last_xfer)
keep_cs = xfer->cs_change;
else
keep_cs = !xfer->cs_change;
}
regmap_update_bits(spifc->regmap, REG_USER4, USER4_CS_ACT,
keep_cs ? USER4_CS_ACT : 0);
/* clear transition done bit */
regmap_update_bits(spifc->regmap, REG_SLAVE, SLAVE_TRST_DONE, 0);
/* start transfer */
regmap_update_bits(spifc->regmap, REG_CMD, CMD_USER, CMD_USER);
ret = meson_spifc_wait_ready(spifc);
if (!ret && xfer->rx_buf)
meson_spifc_drain_buffer(spifc, xfer->rx_buf + offset, len);
return ret;
}
/**
* meson_spifc_transfer_one() - perform a single transfer
* @master: the SPI master
* @spi: the SPI device
* @xfer: the current SPI transfer
* Return: 0 on success, a negative value on error
*/
static int meson_spifc_transfer_one(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *xfer)
{
struct meson_spifc *spifc = spi_master_get_devdata(master);
int len, done = 0, ret = 0;
meson_spifc_setup_speed(spifc, xfer->speed_hz);
regmap_update_bits(spifc->regmap, REG_CTRL, CTRL_ENABLE_AHB, 0);
while (done < xfer->len && !ret) {
len = min_t(int, xfer->len - done, SPIFC_BUFFER_SIZE);
ret = meson_spifc_txrx(spifc, xfer, done, len,
spi_transfer_is_last(master, xfer),
done + len >= xfer->len);
done += len;
}
regmap_update_bits(spifc->regmap, REG_CTRL, CTRL_ENABLE_AHB,
CTRL_ENABLE_AHB);
return ret;
}
/**
* meson_spifc_hw_init() - reset and initialize the SPI controller
* @spifc: the Meson SPI device
*/
static void meson_spifc_hw_init(struct meson_spifc *spifc)
{
/* reset device */
regmap_update_bits(spifc->regmap, REG_SLAVE, SLAVE_SW_RST,
SLAVE_SW_RST);
/* disable compatible mode */
regmap_update_bits(spifc->regmap, REG_USER, USER_CMP_MODE, 0);
/* set master mode */
regmap_update_bits(spifc->regmap, REG_SLAVE, SLAVE_OP_MODE, 0);
}
static int meson_spifc_probe(struct platform_device *pdev)
{
struct spi_master *master;
struct meson_spifc *spifc;
struct resource *res;
void __iomem *base;
unsigned int rate;
int ret = 0;
master = spi_alloc_master(&pdev->dev, sizeof(struct meson_spifc));
if (!master)
return -ENOMEM;
platform_set_drvdata(pdev, master);
spifc = spi_master_get_devdata(master);
spifc->dev = &pdev->dev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
base = devm_ioremap_resource(spifc->dev, res);
if (IS_ERR(base)) {
ret = PTR_ERR(base);
goto out_err;
}
spifc->regmap = devm_regmap_init_mmio(spifc->dev, base,
&spifc_regmap_config);
if (IS_ERR(spifc->regmap)) {
ret = PTR_ERR(spifc->regmap);
goto out_err;
}
spifc->clk = devm_clk_get(spifc->dev, NULL);
if (IS_ERR(spifc->clk)) {
dev_err(spifc->dev, "missing clock\n");
ret = PTR_ERR(spifc->clk);
goto out_err;
}
ret = clk_prepare_enable(spifc->clk);
if (ret) {
dev_err(spifc->dev, "can't prepare clock\n");
goto out_err;
}
rate = clk_get_rate(spifc->clk);
master->num_chipselect = 1;
master->dev.of_node = pdev->dev.of_node;
master->bits_per_word_mask = SPI_BPW_MASK(8);
master->auto_runtime_pm = true;
master->transfer_one = meson_spifc_transfer_one;
master->min_speed_hz = rate >> 6;
master->max_speed_hz = rate >> 1;
meson_spifc_hw_init(spifc);
pm_runtime_set_active(spifc->dev);
pm_runtime_enable(spifc->dev);
ret = devm_spi_register_master(spifc->dev, master);
if (ret) {
dev_err(spifc->dev, "failed to register spi master\n");
goto out_clk;
}
return 0;
out_clk:
clk_disable_unprepare(spifc->clk);
out_err:
spi_master_put(master);
return ret;
}
static int meson_spifc_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct meson_spifc *spifc = spi_master_get_devdata(master);
pm_runtime_get_sync(&pdev->dev);
clk_disable_unprepare(spifc->clk);
pm_runtime_disable(&pdev->dev);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int meson_spifc_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct meson_spifc *spifc = spi_master_get_devdata(master);
int ret;
ret = spi_master_suspend(master);
if (ret)
return ret;
if (!pm_runtime_suspended(dev))
clk_disable_unprepare(spifc->clk);
return 0;
}
static int meson_spifc_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct meson_spifc *spifc = spi_master_get_devdata(master);
int ret;
if (!pm_runtime_suspended(dev)) {
ret = clk_prepare_enable(spifc->clk);
if (ret)
return ret;
}
meson_spifc_hw_init(spifc);
ret = spi_master_resume(master);
if (ret)
clk_disable_unprepare(spifc->clk);
return ret;
}
#endif /* CONFIG_PM_SLEEP */
#ifdef CONFIG_PM_RUNTIME
static int meson_spifc_runtime_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct meson_spifc *spifc = spi_master_get_devdata(master);
clk_disable_unprepare(spifc->clk);
return 0;
}
static int meson_spifc_runtime_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct meson_spifc *spifc = spi_master_get_devdata(master);
return clk_prepare_enable(spifc->clk);
}
#endif /* CONFIG_PM_RUNTIME */
static const struct dev_pm_ops meson_spifc_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(meson_spifc_suspend, meson_spifc_resume)
SET_RUNTIME_PM_OPS(meson_spifc_runtime_suspend,
meson_spifc_runtime_resume,
NULL)
};
static const struct of_device_id meson_spifc_dt_match[] = {
{ .compatible = "amlogic,meson6-spifc", },
{ },
};
static struct platform_driver meson_spifc_driver = {
.probe = meson_spifc_probe,
.remove = meson_spifc_remove,
.driver = {
.name = "meson-spifc",
.of_match_table = of_match_ptr(meson_spifc_dt_match),
.pm = &meson_spifc_pm_ops,
},
};
module_platform_driver(meson_spifc_driver);
MODULE_AUTHOR("Beniamino Galvani <b.galvani@gmail.com>");
MODULE_DESCRIPTION("Amlogic Meson SPIFC driver");
MODULE_LICENSE("GPL v2");
......@@ -182,7 +182,6 @@ static int mxs_spi_txrx_dma(struct mxs_spi *spi,
int min, ret;
u32 ctrl0;
struct page *vm_page;
-void *sg_buf;
struct {
u32 pio[4];
struct scatterlist sg;
......@@ -232,13 +231,14 @@ static int mxs_spi_txrx_dma(struct mxs_spi *spi,
ret = -ENOMEM;
goto err_vmalloc;
}
-sg_buf = page_address(vm_page) +
-((size_t)buf & ~PAGE_MASK);
+sg_init_table(&dma_xfer[sg_count].sg, 1);
+sg_set_page(&dma_xfer[sg_count].sg, vm_page,
+min, offset_in_page(buf));
} else {
-sg_buf = buf;
+sg_init_one(&dma_xfer[sg_count].sg, buf, min);
}
-sg_init_one(&dma_xfer[sg_count].sg, sg_buf, min);
ret = dma_map_sg(ssp->dev, &dma_xfer[sg_count].sg, 1,
(flags & TXRX_WRITE) ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
......@@ -511,7 +511,7 @@ static int mxs_spi_probe(struct platform_device *pdev)
init_completion(&spi->c);
ret = devm_request_irq(&pdev->dev, irq_err, mxs_ssp_irq_handler, 0,
-DRIVER_NAME, ssp);
+dev_name(&pdev->dev), ssp);
if (ret)
goto out_master_free;
......
......@@ -19,6 +19,7 @@ enum {
PORT_BSW0,
PORT_BSW1,
PORT_BSW2,
PORT_QUARK_X1000,
};
struct pxa_spi_info {
......@@ -92,6 +93,12 @@ static struct pxa_spi_info spi_info_configs[] = {
.tx_param = &bsw2_tx_param,
.rx_param = &bsw2_rx_param,
},
[PORT_QUARK_X1000] = {
.type = QUARK_X1000_SSP,
.port_id = -1,
.num_chipselect = 1,
.max_clk_rate = 50000000,
},
};
static int pxa2xx_spi_pci_probe(struct pci_dev *dev,
......@@ -191,6 +198,7 @@ static void pxa2xx_spi_pci_remove(struct pci_dev *dev)
static const struct pci_device_id pxa2xx_spi_pci_devices[] = {
{ PCI_VDEVICE(INTEL, 0x2e6a), PORT_CE4100 },
{ PCI_VDEVICE(INTEL, 0x0935), PORT_QUARK_X1000 },
{ PCI_VDEVICE(INTEL, 0x0f0e), PORT_BYT },
{ PCI_VDEVICE(INTEL, 0x228e), PORT_BSW0 },
{ PCI_VDEVICE(INTEL, 0x2290), PORT_BSW1 },
......
......@@ -63,10 +63,64 @@ MODULE_ALIAS("platform:pxa2xx-spi");
| SSCR1_RFT | SSCR1_TFT | SSCR1_MWDS \
| SSCR1_SPH | SSCR1_SPO | SSCR1_LBM)
#define QUARK_X1000_SSCR1_CHANGE_MASK (QUARK_X1000_SSCR1_STRF \
| QUARK_X1000_SSCR1_EFWR \
| QUARK_X1000_SSCR1_RFT \
| QUARK_X1000_SSCR1_TFT \
| SSCR1_SPH | SSCR1_SPO | SSCR1_LBM)
#define LPSS_RX_THRESH_DFLT 64
#define LPSS_TX_LOTHRESH_DFLT 160
#define LPSS_TX_HITHRESH_DFLT 224
struct quark_spi_rate {
u32 bitrate;
u32 dds_clk_rate;
u32 clk_div;
};
/*
 * Lookup table mapping the desired bit rate to the DDS clock rate and
 * clock divider values given in the Quark SPI datasheet.
 */
static const struct quark_spi_rate quark_spi_rate_table[] = {
/* bitrate, dds_clk_rate, clk_div */
{50000000, 0x800000, 0},
{40000000, 0x666666, 0},
{25000000, 0x400000, 0},
{20000000, 0x666666, 1},
{16667000, 0x800000, 2},
{13333000, 0x666666, 2},
{12500000, 0x200000, 0},
{10000000, 0x800000, 4},
{8000000, 0x666666, 4},
{6250000, 0x400000, 3},
{5000000, 0x400000, 4},
{4000000, 0x666666, 9},
{3125000, 0x80000, 0},
{2500000, 0x400000, 9},
{2000000, 0x666666, 19},
{1563000, 0x40000, 0},
{1250000, 0x200000, 9},
{1000000, 0x400000, 24},
{800000, 0x666666, 49},
{781250, 0x20000, 0},
{625000, 0x200000, 19},
{500000, 0x400000, 49},
{400000, 0x666666, 99},
{390625, 0x10000, 0},
{250000, 0x400000, 99},
{200000, 0x666666, 199},
{195313, 0x8000, 0},
{125000, 0x100000, 49},
{100000, 0x200000, 124},
{50000, 0x100000, 124},
{25000, 0x80000, 124},
{10016, 0x20000, 77},
{5040, 0x20000, 154},
{1002, 0x8000, 194},
};
/* Offset from drv_data->lpss_base */
#define GENERAL_REG 0x08
#define GENERAL_REG_RXTO_HOLDOFF_DISABLE BIT(24)
......@@ -80,6 +134,96 @@ static bool is_lpss_ssp(const struct driver_data *drv_data)
return drv_data->ssp_type == LPSS_SSP;
}
static bool is_quark_x1000_ssp(const struct driver_data *drv_data)
{
return drv_data->ssp_type == QUARK_X1000_SSP;
}
static u32 pxa2xx_spi_get_ssrc1_change_mask(const struct driver_data *drv_data)
{
switch (drv_data->ssp_type) {
case QUARK_X1000_SSP:
return QUARK_X1000_SSCR1_CHANGE_MASK;
default:
return SSCR1_CHANGE_MASK;
}
}
static u32
pxa2xx_spi_get_rx_default_thre(const struct driver_data *drv_data)
{
switch (drv_data->ssp_type) {
case QUARK_X1000_SSP:
return RX_THRESH_QUARK_X1000_DFLT;
default:
return RX_THRESH_DFLT;
}
}
static bool pxa2xx_spi_txfifo_full(const struct driver_data *drv_data)
{
void __iomem *reg = drv_data->ioaddr;
u32 mask;
switch (drv_data->ssp_type) {
case QUARK_X1000_SSP:
mask = QUARK_X1000_SSSR_TFL_MASK;
break;
default:
mask = SSSR_TFL_MASK;
break;
}
return (read_SSSR(reg) & mask) == mask;
}
static void pxa2xx_spi_clear_rx_thre(const struct driver_data *drv_data,
u32 *sccr1_reg)
{
u32 mask;
switch (drv_data->ssp_type) {
case QUARK_X1000_SSP:
mask = QUARK_X1000_SSCR1_RFT;
break;
default:
mask = SSCR1_RFT;
break;
}
*sccr1_reg &= ~mask;
}
static void pxa2xx_spi_set_rx_thre(const struct driver_data *drv_data,
u32 *sccr1_reg, u32 threshold)
{
switch (drv_data->ssp_type) {
case QUARK_X1000_SSP:
*sccr1_reg |= QUARK_X1000_SSCR1_RxTresh(threshold);
break;
default:
*sccr1_reg |= SSCR1_RxTresh(threshold);
break;
}
}
static u32 pxa2xx_configure_sscr0(const struct driver_data *drv_data,
u32 clk_div, u8 bits)
{
switch (drv_data->ssp_type) {
case QUARK_X1000_SSP:
return clk_div
| QUARK_X1000_SSCR0_Motorola
| QUARK_X1000_SSCR0_DataSize(bits > 32 ? 8 : bits)
| SSCR0_SSE;
default:
return clk_div
| SSCR0_Motorola
| SSCR0_DataSize(bits > 16 ? bits - 16 : bits)
| SSCR0_SSE
| (bits > 16 ? SSCR0_EDSS : 0);
}
}
/*
* Read and write LPSS SSP private registers. Callers must check that
* is_lpss_ssp() returns true before using these.
......@@ -234,7 +378,7 @@ static int null_writer(struct driver_data *drv_data)
void __iomem *reg = drv_data->ioaddr;
u8 n_bytes = drv_data->n_bytes;
-if (((read_SSSR(reg) & SSSR_TFL_MASK) == SSSR_TFL_MASK)
+if (pxa2xx_spi_txfifo_full(drv_data)
|| (drv_data->tx == drv_data->tx_end))
return 0;
......@@ -262,7 +406,7 @@ static int u8_writer(struct driver_data *drv_data)
{
void __iomem *reg = drv_data->ioaddr;
-if (((read_SSSR(reg) & SSSR_TFL_MASK) == SSSR_TFL_MASK)
+if (pxa2xx_spi_txfifo_full(drv_data)
|| (drv_data->tx == drv_data->tx_end))
return 0;
......@@ -289,7 +433,7 @@ static int u16_writer(struct driver_data *drv_data)
{
void __iomem *reg = drv_data->ioaddr;
-if (((read_SSSR(reg) & SSSR_TFL_MASK) == SSSR_TFL_MASK)
+if (pxa2xx_spi_txfifo_full(drv_data)
|| (drv_data->tx == drv_data->tx_end))
return 0;
......@@ -316,7 +460,7 @@ static int u32_writer(struct driver_data *drv_data)
{
void __iomem *reg = drv_data->ioaddr;
-if (((read_SSSR(reg) & SSSR_TFL_MASK) == SSSR_TFL_MASK)
+if (pxa2xx_spi_txfifo_full(drv_data)
|| (drv_data->tx == drv_data->tx_end))
return 0;
......@@ -508,8 +652,9 @@ static irqreturn_t interrupt_transfer(struct driver_data *drv_data)
* remaining RX bytes.
*/
if (pxa25x_ssp_comp(drv_data)) {
+u32 rx_thre;
-sccr1_reg &= ~SSCR1_RFT;
+pxa2xx_spi_clear_rx_thre(drv_data, &sccr1_reg);
bytes_left = drv_data->rx_end - drv_data->rx;
switch (drv_data->n_bytes) {
......@@ -519,10 +664,11 @@ static irqreturn_t interrupt_transfer(struct driver_data *drv_data)
bytes_left >>= 1;
}
-if (bytes_left > RX_THRESH_DFLT)
-bytes_left = RX_THRESH_DFLT;
+rx_thre = pxa2xx_spi_get_rx_default_thre(drv_data);
+if (rx_thre > bytes_left)
+rx_thre = bytes_left;
-sccr1_reg |= SSCR1_RxTresh(bytes_left);
+pxa2xx_spi_set_rx_thre(drv_data, &sccr1_reg, rx_thre);
}
write_SSCR1(sccr1_reg, reg);
}
......@@ -585,6 +731,28 @@ static irqreturn_t ssp_int(int irq, void *dev_id)
return drv_data->transfer_handler(drv_data);
}
/*
 * Look up the 'dds' and 'clk_div' register values for the requested
 * 'rate' in the table taken from the Quark SPI datasheet and return the
 * bit rate that will actually be used.
 */
static u32 quark_x1000_set_clk_regvals(u32 rate, u32 *dds, u32 *clk_div)
{
unsigned int i;
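/*
 * The table is sorted by descending bitrate: pick the fastest entry not
 * above the requested rate, falling back to the slowest entry when the
 * request is below the table minimum.
 */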
for (i = 0; i < ARRAY_SIZE(quark_spi_rate_table); i++) {
if (rate >= quark_spi_rate_table[i].bitrate) {
*dds = quark_spi_rate_table[i].dds_clk_rate;
*clk_div = quark_spi_rate_table[i].clk_div;
return quark_spi_rate_table[i].bitrate;
}
}
*dds = quark_spi_rate_table[i-1].dds_clk_rate;
*clk_div = quark_spi_rate_table[i-1].clk_div;
return quark_spi_rate_table[i-1].bitrate;
}
static unsigned int ssp_get_clk_div(struct driver_data *drv_data, int rate)
{
unsigned long ssp_clk = drv_data->max_clk_rate;
......@@ -598,6 +766,20 @@ static unsigned int ssp_get_clk_div(struct driver_data *drv_data, int rate)
return ((ssp_clk / rate - 1) & 0xfff) << 8;
}
static unsigned int pxa2xx_ssp_get_clk_div(struct driver_data *drv_data,
struct chip_data *chip, int rate)
{
u32 clk_div;
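/*
 * Quark X1000 takes its rate from the DDS lookup table and caches the
 * DDS value in the chip data; other SSP types use the plain SCR divider.
 * Either way the result is returned already shifted into position for
 * SSCR0.
 */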
switch (drv_data->ssp_type) {
case QUARK_X1000_SSP:
quark_x1000_set_clk_regvals(rate, &chip->dds_rate, &clk_div);
return clk_div << 8;
default:
return ssp_get_clk_div(drv_data, rate);
}
}
static void pump_transfers(unsigned long data)
{
struct driver_data *drv_data = (struct driver_data *)data;
......@@ -613,6 +795,7 @@ static void pump_transfers(unsigned long data)
u32 cr1;
u32 dma_thresh = drv_data->cur_chip->dma_threshold;
u32 dma_burst = drv_data->cur_chip->dma_burst_size;
u32 change_mask = pxa2xx_spi_get_ssrc1_change_mask(drv_data);
/* Get current state information */
message = drv_data->cur_msg;
......@@ -699,7 +882,7 @@ static void pump_transfers(unsigned long data)
if (transfer->bits_per_word)
bits = transfer->bits_per_word;
-clk_div = ssp_get_clk_div(drv_data, speed);
+clk_div = pxa2xx_ssp_get_clk_div(drv_data, chip, speed);
if (bits <= 8) {
drv_data->n_bytes = 1;
......@@ -731,11 +914,7 @@ static void pump_transfers(unsigned long data)
"pump_transfers: DMA burst size reduced to match bits_per_word\n");
}
-cr0 = clk_div
-| SSCR0_Motorola
-| SSCR0_DataSize(bits > 16 ? bits - 16 : bits)
-| SSCR0_SSE
-| (bits > 16 ? SSCR0_EDSS : 0);
+cr0 = pxa2xx_configure_sscr0(drv_data, clk_div, bits);
}
message->state = RUNNING_STATE;
......@@ -771,17 +950,20 @@ static void pump_transfers(unsigned long data)
write_SSITF(chip->lpss_tx_threshold, reg);
}
if (is_quark_x1000_ssp(drv_data) &&
(read_DDS_RATE(reg) != chip->dds_rate))
write_DDS_RATE(chip->dds_rate, reg);
/* see if we need to reload the config registers */
-if ((read_SSCR0(reg) != cr0)
-|| (read_SSCR1(reg) & SSCR1_CHANGE_MASK) !=
-(cr1 & SSCR1_CHANGE_MASK)) {
+if ((read_SSCR0(reg) != cr0) ||
+(read_SSCR1(reg) & change_mask) != (cr1 & change_mask)) {
/* stop the SSP, and update the other bits */
write_SSCR0(cr0 & ~SSCR0_SSE, reg);
if (!pxa25x_ssp_comp(drv_data))
write_SSTO(chip->timeout, reg);
/* first set CR1 without interrupt and service enables */
-write_SSCR1(cr1 & SSCR1_CHANGE_MASK, reg);
+write_SSCR1(cr1 & change_mask, reg);
/* restart the SSP */
write_SSCR0(cr0, reg);
......@@ -875,14 +1057,22 @@ static int setup(struct spi_device *spi)
unsigned int clk_div;
uint tx_thres, tx_hi_thres, rx_thres;
-if (is_lpss_ssp(drv_data)) {
+switch (drv_data->ssp_type) {
+case QUARK_X1000_SSP:
+tx_thres = TX_THRESH_QUARK_X1000_DFLT;
+tx_hi_thres = 0;
+rx_thres = RX_THRESH_QUARK_X1000_DFLT;
+break;
+case LPSS_SSP:
tx_thres = LPSS_TX_LOTHRESH_DFLT;
tx_hi_thres = LPSS_TX_HITHRESH_DFLT;
rx_thres = LPSS_RX_THRESH_DFLT;
-} else {
+break;
+default:
tx_thres = TX_THRESH_DFLT;
tx_hi_thres = 0;
rx_thres = RX_THRESH_DFLT;
+break;
}
/* Only alloc on first setup */
......@@ -935,9 +1125,6 @@ static int setup(struct spi_device *spi)
chip->enable_dma = drv_data->master_info->enable_dma;
}
-chip->threshold = (SSCR1_RxTresh(rx_thres) & SSCR1_RFT) |
-(SSCR1_TxTresh(tx_thres) & SSCR1_TFT);
chip->lpss_rx_threshold = SSIRF_RxThresh(rx_thres);
chip->lpss_tx_threshold = SSITF_TxLoThresh(tx_thres)
| SSITF_TxHiThresh(tx_hi_thres);
......@@ -956,15 +1143,24 @@ static int setup(struct spi_device *spi)
}
}
-clk_div = ssp_get_clk_div(drv_data, spi->max_speed_hz);
+clk_div = pxa2xx_ssp_get_clk_div(drv_data, chip, spi->max_speed_hz);
chip->speed_hz = spi->max_speed_hz;
-chip->cr0 = clk_div
-| SSCR0_Motorola
-| SSCR0_DataSize(spi->bits_per_word > 16 ?
-spi->bits_per_word - 16 : spi->bits_per_word)
-| SSCR0_SSE
-| (spi->bits_per_word > 16 ? SSCR0_EDSS : 0);
+chip->cr0 = pxa2xx_configure_sscr0(drv_data, clk_div,
+spi->bits_per_word);
switch (drv_data->ssp_type) {
case QUARK_X1000_SSP:
chip->threshold = (QUARK_X1000_SSCR1_RxTresh(rx_thres)
& QUARK_X1000_SSCR1_RFT)
| (QUARK_X1000_SSCR1_TxTresh(tx_thres)
& QUARK_X1000_SSCR1_TFT);
break;
default:
chip->threshold = (SSCR1_RxTresh(rx_thres) & SSCR1_RFT) |
(SSCR1_TxTresh(tx_thres) & SSCR1_TFT);
break;
}
chip->cr1 &= ~(SSCR1_SPO | SSCR1_SPH);
chip->cr1 |= (((spi->mode & SPI_CPHA) != 0) ? SSCR1_SPH : 0)
| (((spi->mode & SPI_CPOL) != 0) ? SSCR1_SPO : 0);
......@@ -993,7 +1189,8 @@ static int setup(struct spi_device *spi)
chip->read = u16_reader;
chip->write = u16_writer;
} else if (spi->bits_per_word <= 32) {
-chip->cr0 |= SSCR0_EDSS;
+if (!is_quark_x1000_ssp(drv_data))
+chip->cr0 |= SSCR0_EDSS;
chip->n_bytes = 4;
chip->read = u32_reader;
chip->write = u32_writer;
......@@ -1144,7 +1341,15 @@ static int pxa2xx_spi_probe(struct platform_device *pdev)
drv_data->ioaddr = ssp->mmio_base;
drv_data->ssdr_physical = ssp->phys_base + SSDR;
if (pxa25x_ssp_comp(drv_data)) {
-master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 16);
+switch (drv_data->ssp_type) {
+case QUARK_X1000_SSP:
+master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32);
+break;
+default:
+master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 16);
+break;
+}
drv_data->int_cr1 = SSCR1_TIE | SSCR1_RIE;
drv_data->dma_cr1 = 0;
drv_data->clear_sr = SSSR_ROR;
......@@ -1182,16 +1387,35 @@ static int pxa2xx_spi_probe(struct platform_device *pdev)
/* Load default SSP configuration */
write_SSCR0(0, drv_data->ioaddr);
-write_SSCR1(SSCR1_RxTresh(RX_THRESH_DFLT) |
-SSCR1_TxTresh(TX_THRESH_DFLT),
-drv_data->ioaddr);
-write_SSCR0(SSCR0_SCR(2)
-| SSCR0_Motorola
-| SSCR0_DataSize(8),
-drv_data->ioaddr);
+switch (drv_data->ssp_type) {
+case QUARK_X1000_SSP:
+write_SSCR1(QUARK_X1000_SSCR1_RxTresh(
+RX_THRESH_QUARK_X1000_DFLT) |
+QUARK_X1000_SSCR1_TxTresh(
+TX_THRESH_QUARK_X1000_DFLT),
+drv_data->ioaddr);
+/* use the Motorola SPI protocol and 8-bit frames */
+write_SSCR0(QUARK_X1000_SSCR0_Motorola
+| QUARK_X1000_SSCR0_DataSize(8),
+drv_data->ioaddr);
+break;
+default:
+write_SSCR1(SSCR1_RxTresh(RX_THRESH_DFLT) |
+SSCR1_TxTresh(TX_THRESH_DFLT),
+drv_data->ioaddr);
+write_SSCR0(SSCR0_SCR(2)
+| SSCR0_Motorola
+| SSCR0_DataSize(8),
+drv_data->ioaddr);
+break;
+}
if (!pxa25x_ssp_comp(drv_data))
write_SSTO(0, drv_data->ioaddr);
-write_SSPSP(0, drv_data->ioaddr);
+if (!is_quark_x1000_ssp(drv_data))
+write_SSPSP(0, drv_data->ioaddr);
lpss_ssp_setup(drv_data);
......
......@@ -93,6 +93,7 @@ struct driver_data {
struct chip_data {
u32 cr0;
u32 cr1;
u32 dds_rate;
u32 psp;
u32 timeout;
u8 n_bytes;
......@@ -126,6 +127,7 @@ DEFINE_SSP_REG(SSCR1, 0x04)
DEFINE_SSP_REG(SSSR, 0x08)
DEFINE_SSP_REG(SSITR, 0x0c)
DEFINE_SSP_REG(SSDR, 0x10)
DEFINE_SSP_REG(DDS_RATE, 0x28) /* DDS Clock Rate */
DEFINE_SSP_REG(SSTO, 0x28)
DEFINE_SSP_REG(SSPSP, 0x2c)
DEFINE_SSP_REG(SSITF, SSITF)
......@@ -141,18 +143,22 @@ DEFINE_SSP_REG(SSIRF, SSIRF)
static inline int pxa25x_ssp_comp(struct driver_data *drv_data)
{
-if (drv_data->ssp_type == PXA25x_SSP)
+switch (drv_data->ssp_type) {
+case PXA25x_SSP:
+case CE4100_SSP:
+case QUARK_X1000_SSP:
return 1;
-if (drv_data->ssp_type == CE4100_SSP)
-return 1;
-return 0;
+default:
+return 0;
+}
}
static inline void write_SSSR_CS(struct driver_data *drv_data, u32 val)
{
void __iomem *reg = drv_data->ioaddr;
-if (drv_data->ssp_type == CE4100_SSP)
+if (drv_data->ssp_type == CE4100_SSP ||
+drv_data->ssp_type == QUARK_X1000_SSP)
val |= read_SSSR(reg) & SSSR_ALT_FRM_MASK;
write_SSSR(val, reg);
......
......@@ -749,8 +749,6 @@ static int rockchip_spi_remove(struct platform_device *pdev)
if (rs->dma_rx.ch)
dma_release_channel(rs->dma_rx.ch);
-spi_master_put(master);
return 0;
}
......
......@@ -33,8 +33,9 @@
#include <linux/platform_data/spi-s3c64xx.h>
-#define MAX_SPI_PORTS 3
+#define MAX_SPI_PORTS 6
#define S3C64XX_SPI_QUIRK_POLL (1 << 0)
#define S3C64XX_SPI_QUIRK_CS_AUTO (1 << 1)
/* Registers and bit-fields */
......@@ -78,6 +79,7 @@
#define S3C64XX_SPI_SLAVE_AUTO (1<<1)
#define S3C64XX_SPI_SLAVE_SIG_INACT (1<<0)
#define S3C64XX_SPI_SLAVE_NSC_CNT_2 (2<<4)
#define S3C64XX_SPI_INT_TRAILING_EN (1<<6)
#define S3C64XX_SPI_INT_RX_OVERRUN_EN (1<<5)
......@@ -344,16 +346,8 @@ static int s3c64xx_spi_prepare_transfer(struct spi_master *spi)
spi->dma_tx = sdd->tx_dma.ch;
}
-ret = pm_runtime_get_sync(&sdd->pdev->dev);
-if (ret < 0) {
-dev_err(dev, "Failed to enable device: %d\n", ret);
-goto out_tx;
-}
return 0;
-out_tx:
-dma_release_channel(sdd->tx_dma.ch);
out_rx:
dma_release_channel(sdd->rx_dma.ch);
out:
......@@ -370,7 +364,6 @@ static int s3c64xx_spi_unprepare_transfer(struct spi_master *spi)
dma_release_channel(sdd->tx_dma.ch);
}
-pm_runtime_put(&sdd->pdev->dev);
return 0;
}
......@@ -717,7 +710,12 @@ static int s3c64xx_spi_transfer_one(struct spi_master *master,
enable_datapath(sdd, spi, xfer, use_dma);
/* Start the signals */
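/*
 * Controllers with the CS_AUTO quirk (set by the Exynos7 port config
 * below) toggle the chip select in hardware, so auto mode is enabled
 * instead of asserting the line manually.
 */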
-writel(0, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+if (!(sdd->port_conf->quirks & S3C64XX_SPI_QUIRK_CS_AUTO))
+writel(0, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+else
+writel(readl(sdd->regs + S3C64XX_SPI_SLAVE_SEL)
+| S3C64XX_SPI_SLAVE_AUTO | S3C64XX_SPI_SLAVE_NSC_CNT_2,
+sdd->regs + S3C64XX_SPI_SLAVE_SEL);
spin_unlock_irqrestore(&sdd->lock, flags);
......@@ -866,13 +864,15 @@ static int s3c64xx_spi_setup(struct spi_device *spi)
}
pm_runtime_put(&sdd->pdev->dev);
-writel(S3C64XX_SPI_SLAVE_SIG_INACT, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+if (!(sdd->port_conf->quirks & S3C64XX_SPI_QUIRK_CS_AUTO))
+writel(S3C64XX_SPI_SLAVE_SIG_INACT, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
return 0;
setup_exit:
pm_runtime_put(&sdd->pdev->dev);
/* setup() returns with device de-selected */
-writel(S3C64XX_SPI_SLAVE_SIG_INACT, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+if (!(sdd->port_conf->quirks & S3C64XX_SPI_QUIRK_CS_AUTO))
+writel(S3C64XX_SPI_SLAVE_SIG_INACT, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
if (gpio_is_valid(spi->cs_gpio))
gpio_free(spi->cs_gpio);
......@@ -946,7 +946,8 @@ static void s3c64xx_spi_hwinit(struct s3c64xx_spi_driver_data *sdd, int channel)
sdd->cur_speed = 0;
-writel(S3C64XX_SPI_SLAVE_SIG_INACT, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+if (!(sdd->port_conf->quirks & S3C64XX_SPI_QUIRK_CS_AUTO))
+writel(S3C64XX_SPI_SLAVE_SIG_INACT, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
/* Disable Interrupts - we use Polling if not DMA mode */
writel(0, regs + S3C64XX_SPI_INT_EN);
......@@ -1341,6 +1342,15 @@ static struct s3c64xx_spi_port_config exynos5440_spi_port_config = {
.quirks = S3C64XX_SPI_QUIRK_POLL,
};
static struct s3c64xx_spi_port_config exynos7_spi_port_config = {
.fifo_lvl_mask = { 0x1ff, 0x7F, 0x7F, 0x7F, 0x7F, 0x1ff},
.rx_lvl_offset = 15,
.tx_st_done = 25,
.high_speed = true,
.clk_from_cmu = true,
.quirks = S3C64XX_SPI_QUIRK_CS_AUTO,
};
static struct platform_device_id s3c64xx_spi_driver_ids[] = {
{
.name = "s3c2443-spi",
......@@ -1374,6 +1384,9 @@ static const struct of_device_id s3c64xx_spi_dt_match[] = {
{ .compatible = "samsung,exynos5440-spi",
.data = (void *)&exynos5440_spi_port_config,
},
{ .compatible = "samsung,exynos7-spi",
.data = (void *)&exynos7_spi_port_config,
},
{ },
};
MODULE_DEVICE_TABLE(of, s3c64xx_spi_dt_match);
......
......@@ -23,6 +23,7 @@
#include <linux/dmaengine.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/reset.h>
#define DRIVER_NAME "sirfsoc_spi"
......@@ -134,6 +135,7 @@
ALIGNED(x->len) && (x->len < 2 * PAGE_SIZE))
#define SIRFSOC_MAX_CMD_BYTES 4
#define SIRFSOC_SPI_DEFAULT_FRQ 1000000
struct sirfsoc_spi {
struct spi_bitbang bitbang;
......@@ -629,9 +631,6 @@ static int spi_sirfsoc_setup(struct spi_device *spi)
{
struct sirfsoc_spi *sspi;
-if (!spi->max_speed_hz)
-return -EINVAL;
sspi = spi_master_get_devdata(spi->master);
if (spi->cs_gpio == -ENOENT)
......@@ -649,6 +648,12 @@ static int spi_sirfsoc_probe(struct platform_device *pdev)
int irq;
int i, ret;
ret = device_reset(&pdev->dev);
if (ret) {
dev_err(&pdev->dev, "SPI reset failed!\n");
return ret;
}
master = spi_alloc_master(&pdev->dev, sizeof(*sspi));
if (!master) {
dev_err(&pdev->dev, "Unable to allocate SPI master\n");
......@@ -683,6 +688,7 @@ static int spi_sirfsoc_probe(struct platform_device *pdev)
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST | SPI_CS_HIGH;
master->bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(12) |
SPI_BPW_MASK(16) | SPI_BPW_MASK(32);
master->max_speed_hz = SIRFSOC_SPI_DEFAULT_FRQ;
sspi->bitbang.master->dev.of_node = pdev->dev.of_node;
/* request DMA channels */
......
......@@ -402,8 +402,7 @@ static int txx9spi_probe(struct platform_device *dev)
exit:
if (c->workqueue)
destroy_workqueue(c->workqueue);
-if (c->clk)
-clk_disable(c->clk);
+clk_disable(c->clk);
spi_master_put(master);
return ret;
}
......
......@@ -1001,7 +1001,7 @@ static int spi_init_queue(struct spi_master *master)
dev_name(&master->dev));
if (IS_ERR(master->kworker_task)) {
dev_err(&master->dev, "failed to create message pump task\n");
-return -ENOMEM;
+return PTR_ERR(master->kworker_task);
}
init_kthread_work(&master->pump_messages, spi_pump_messages);
......
......@@ -87,6 +87,7 @@ struct spidev_data {
unsigned users;
u8 *tx_buffer;
u8 *rx_buffer;
u32 speed_hz;
};
static LIST_HEAD(device_list);
......@@ -138,6 +139,7 @@ spidev_sync_write(struct spidev_data *spidev, size_t len)
struct spi_transfer t = {
.tx_buf = spidev->tx_buffer,
.len = len,
.speed_hz = spidev->speed_hz,
};
struct spi_message m;
......@@ -152,6 +154,7 @@ spidev_sync_read(struct spidev_data *spidev, size_t len)
struct spi_transfer t = {
.rx_buf = spidev->rx_buffer,
.len = len,
.speed_hz = spidev->speed_hz,
};
struct spi_message m;
......@@ -274,6 +277,8 @@ static int spidev_message(struct spidev_data *spidev,
k_tmp->bits_per_word = u_tmp->bits_per_word;
k_tmp->delay_usecs = u_tmp->delay_usecs;
k_tmp->speed_hz = u_tmp->speed_hz;
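/* fall back to this device's default speed when the userspace
 * transfer does not specify one */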
if (!k_tmp->speed_hz)
k_tmp->speed_hz = spidev->speed_hz;
#ifdef VERBOSE
dev_dbg(&spidev->spi->dev,
" xfer len %zd %s%s%s%dbits %u usec %uHz\n",
......@@ -377,7 +382,7 @@ spidev_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
retval = __put_user(spi->bits_per_word, (__u8 __user *)arg);
break;
case SPI_IOC_RD_MAX_SPEED_HZ:
-retval = __put_user(spi->max_speed_hz, (__u32 __user *)arg);
+retval = __put_user(spidev->speed_hz, (__u32 __user *)arg);
break;
/* write requests */
......@@ -441,10 +446,11 @@ spidev_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
spi->max_speed_hz = tmp;
retval = spi_setup(spi);
-if (retval < 0)
-spi->max_speed_hz = save;
+if (retval >= 0)
+spidev->speed_hz = tmp;
else
dev_dbg(&spi->dev, "%d Hz (max)\n", tmp);
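/*
 * Restore the controller's max_speed_hz; the accepted rate is kept in
 * spidev->speed_hz and applied per transfer instead.
 */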
+spi->max_speed_hz = save;
}
break;
......@@ -570,6 +576,8 @@ static int spidev_release(struct inode *inode, struct file *filp)
kfree(spidev->rx_buffer);
spidev->rx_buffer = NULL;
spidev->speed_hz = spidev->spi->max_speed_hz;
/* ... after we unbound from the underlying device? */
spin_lock_irq(&spidev->spi_lock);
dofree = (spidev->spi == NULL);
......@@ -650,6 +658,8 @@ static int spidev_probe(struct spi_device *spi)
}
mutex_unlock(&device_list_lock);
spidev->speed_hz = spi->max_speed_hz;
if (status == 0)
spi_set_drvdata(spi, spidev);
else
......
......@@ -108,6 +108,25 @@
#define SSCR1_RxTresh(x) (((x) - 1) << 10) /* level [1..4] */
#endif
/* QUARK_X1000 SSCR0 bit definition */
#define QUARK_X1000_SSCR0_DSS (0x1F) /* Data Size Select (mask) */
#define QUARK_X1000_SSCR0_DataSize(x) ((x) - 1) /* Data Size Select [4..32] */
#define QUARK_X1000_SSCR0_FRF (0x3 << 5) /* FRame Format (mask) */
#define QUARK_X1000_SSCR0_Motorola (0x0 << 5) /* Motorola's Serial Peripheral Interface (SPI) */
#define RX_THRESH_QUARK_X1000_DFLT 1
#define TX_THRESH_QUARK_X1000_DFLT 16
#define QUARK_X1000_SSSR_TFL_MASK (0x1F << 8) /* Transmit FIFO Level mask */
#define QUARK_X1000_SSSR_RFL_MASK (0x1F << 13) /* Receive FIFO Level mask */
#define QUARK_X1000_SSCR1_TFT (0x1F << 6) /* Transmit FIFO Threshold (mask) */
#define QUARK_X1000_SSCR1_TxTresh(x) (((x) - 1) << 6) /* level [1..32] */
#define QUARK_X1000_SSCR1_RFT (0x1F << 11) /* Receive FIFO Threshold (mask) */
#define QUARK_X1000_SSCR1_RxTresh(x) (((x) - 1) << 11) /* level [1..32] */
#define QUARK_X1000_SSCR1_STRF (1 << 17) /* Select FIFO or EFWR */
#define QUARK_X1000_SSCR1_EFWR (1 << 16) /* Enable FIFO Write/Read */
/* extra bits in PXA255, PXA26x and PXA27x SSP ports */
#define SSCR0_TISSP (1 << 4) /* TI Sync Serial Protocol */
#define SSCR0_PSP (3 << 4) /* PSP - Programmable Serial Protocol */
......@@ -175,6 +194,7 @@ enum pxa_ssp_type {
PXA910_SSP,
CE4100_SSP,
LPSS_SSP,
QUARK_X1000_SSP,
};
struct ssp_device {
......
......@@ -1049,4 +1049,10 @@ spi_unregister_device(struct spi_device *spi)
extern const struct spi_device_id *
spi_get_device_id(const struct spi_device *sdev);
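/*
 * spi_transfer_is_last - true if @xfer is the final transfer of the
 * message currently being processed by @master; lets drivers that use
 * the core message handling special-case the last transfer (e.g. for
 * chip select handling).
 */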
static inline bool
spi_transfer_is_last(struct spi_master *master, struct spi_transfer *xfer)
{
return list_is_last(&xfer->transfer_list, &master->cur_msg->transfers);
}
#endif /* __LINUX_SPI_H */