Commit 5a602e15 authored by Linus Torvalds

Merge tag 'spi-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi updates from Mark Brown:
 "No framework updates for the SPI API this time around aside from one
  small fix, just driver improvements.  Some highlights include:

   - New driver support for CSR USP, Mikrotik RB4xx and Zynq GQSPI
     controllers.

   - Modernisation of the OMAP McSPI controller driver, moving it to
     current APIs to enable support for a wider range of client drivers.

   - DMA support for the bcm2835 controller"

* tag 'spi-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (60 commits)
  spi: zynq: Remove execute bit
  spi: atmel: add support to FIFOs
  spi: atmel: update DT bindings documentation
  spi: spi-fsl-dspi: Update DT binding documentation
  spi: pxa2xx: Constify ACPI device ids
  spi: Add support for Zynq Ultrascale+ MPSoC GQSPI controller
  spi: zynq: Add DT bindings documentation for Zynq Ultrascale+ MPSoC GQSPI controller
  spi: fsl-dspi: Use pinctrl PM helpers
  spi: davinci: change the lower limit of pre-scale divider to 1
  spi: spi-fsl-dspi: Change the way of increasing spi_message->actual_length
  spi: spi-fsl-dspi: Enable TCF interrupt mode support
  spi: atmel: add support for the internal chip-select of the spi controller
  spi: spi-pxa2xx: remove legacy PXA DMA bits
  spi: pxa2xx: Make LPSS SPI general register optional
  spi: pxa2xx: Prepare for new Intel LPSS SPI type
  spi: pxa2xx: Differentiate Intel LPSS types
  spi: restore rx/tx_buf in case of unset CONFIG_HAS_DMA
  spi: rspi: Re-do the returning value of qspi_transfer_out_in
  spi: rspi: modify the name of "qspi_trigger_transfer_out_int" function
  spi: orion: Fix extended baud rates for each Armada SoCs
  ...
parents e12bdf0d fda052b0
Binding for Qualcomm Atheros AR7xxx/AR9xxx SPI controller
Required properties:
- compatible: has to be "qca,<soc-type>-spi", with "qca,ar7100-spi" as fallback.
- reg: Base address and size of the controller's memory area
- clocks: phandle to the AHB clock.
- clock-names: has to be "ahb".
- #address-cells: <1>, as required by generic SPI binding.
- #size-cells: <0>, also as required by generic SPI binding.
Child nodes as per the generic SPI binding.
Example:
spi@1F000000 {
	compatible = "qca,ar9132-spi", "qca,ar7100-spi";
	reg = <0x1F000000 0x10>;
	clocks = <&pll 2>;
	clock-names = "ahb";
	#address-cells = <1>;
	#size-cells = <0>;
};
 ARM Freescale DSPI controller
 Required properties:
-- compatible : "fsl,vf610-dspi"
+- compatible : "fsl,vf610-dspi", "fsl,ls1021a-v1.0-dspi", "fsl,ls2085a-dspi"
 - reg : Offset and length of the register set for the device
 - interrupts : Should contain SPI controller interrupt
 - clocks: from common clock binding: handle to dspi clock.
......
 Marvell Orion SPI device
 Required properties:
-- compatible : should be "marvell,orion-spi" or "marvell,armada-370-spi".
+- compatible : should be one of the following:
+    - "marvell,orion-spi" for the Orion, mv78x00, Kirkwood and Dove SoCs
+    - "marvell,armada-370-spi", for the Armada 370 SoCs
+    - "marvell,armada-375-spi", for the Armada 375 SoCs
+    - "marvell,armada-380-spi", for the Armada 38x SoCs
+    - "marvell,armada-390-spi", for the Armada 39x SoCs
+    - "marvell,armada-xp-spi", for the Armada XP SoCs
 - reg : offset and length of the register set for the device
 - cell-index : Which of multiple SPI controllers is this.
 Optional properties:
......
 * CSR SiRFprimaII Serial Peripheral Interface
 Required properties:
-- compatible : Should be "sirf,prima2-spi"
+- compatible : Should be "sirf,prima2-spi", "sirf,prima2-usp"
+               or "sirf,atlas7-usp"
 - reg : Offset and length of the register set for the device
 - interrupts : Should contain SPI interrupt
 - resets: phandle to the reset controller asserting this device in
......
Xilinx Zynq UltraScale+ MPSoC GQSPI controller Device Tree Bindings
-------------------------------------------------------------------
Required properties:
- compatible : Should be "xlnx,zynqmp-qspi-1.0".
- reg : Physical base address and size of GQSPI registers map.
- interrupts : Property with a value describing the interrupt
number.
- interrupt-parent : Must be core interrupt controller.
- clock-names : List of input clock names - "ref_clk", "pclk"
(See clock bindings for details).
- clocks : Clock phandles (see clock bindings for details).
Optional properties:
- num-cs : Number of chip selects used.
Example:
qspi: spi@ff0f0000 {
	compatible = "xlnx,zynqmp-qspi-1.0";
	clock-names = "ref_clk", "pclk";
	clocks = <&misc_clk &misc_clk>;
	interrupts = <0 15 4>;
	interrupt-parent = <&gic>;
	num-cs = <1>;
	reg = <0x0 0xff0f0000 0x1000>, <0x0 0xc0000000 0x8000000>;
};
...@@ -4,11 +4,16 @@ Required properties:
 - compatible : should be "atmel,at91rm9200-spi".
 - reg: Address and length of the register set for the device
 - interrupts: Should contain spi interrupt
-- cs-gpios: chipselects
+- cs-gpios: chipselects (optional for SPI controller version >= 2 with the
+  Chip Select Active After Transfer feature).
 - clock-names: tuple listing input clock names.
   Required elements: "spi_clk"
 - clocks: phandles to input clocks.
+
+Optional properties:
+- atmel,fifo-size: maximum number of data the RX and TX FIFOs can store for FIFO
+  capable SPI controllers.
 Example:
 spi1: spi@fffcc000 {
...@@ -20,6 +25,7 @@ spi1: spi@fffcc000 {
 	clocks = <&spi1_clk>;
 	clock-names = "spi_clk";
 	cs-gpios = <&pioB 3 0>;
+	atmel,fifo-size = <32>;
 	status = "okay";
 	mmc-slot@0 {
......
...@@ -4,9 +4,9 @@ Required properties:
 - compatible : "arm,pl022", "arm,primecell"
 - reg : Offset and length of the register set for the device
 - interrupts : Should contain SPI controller interrupt
+- num-cs : total number of chipselects
 Optional properties:
-- num-cs : total number of chipselects
 - cs-gpios : should specify GPIOs used for chipselects.
   The gpios will be referred to as reg = <index> in the SPI child nodes.
   If unspecified, a single SPI device without a chip select can be used.
......
...@@ -16,8 +16,4 @@ struct ath79_spi_platform_data {
 	unsigned num_chipselect;
 };
 
-struct ath79_spi_controller_data {
-	unsigned gpio;
-};
-
 #endif /* _ATH79_SPI_PLATFORM_H */
...@@ -77,6 +77,7 @@ config SPI_ATMEL
 config SPI_BCM2835
 	tristate "BCM2835 SPI controller"
+	depends on GPIOLIB
 	depends on ARCH_BCM2835 || COMPILE_TEST
 	depends on GPIOLIB
 	help
...@@ -221,7 +222,7 @@ config SPI_FALCON
 config SPI_GPIO
 	tristate "GPIO-based bitbanging SPI Master"
-	depends on GPIOLIB
+	depends on GPIOLIB || COMPILE_TEST
 	select SPI_BITBANG
 	help
 	  This simple GPIO bitbanging SPI master uses the arch-neutral GPIO
...@@ -327,7 +328,7 @@ config SPI_MESON_SPIFC
 config SPI_OC_TINY
 	tristate "OpenCores tiny SPI"
-	depends on GPIOLIB
+	depends on GPIOLIB || COMPILE_TEST
 	select SPI_BITBANG
 	help
 	  This is the driver for OpenCores tiny SPI master controller.
...@@ -394,16 +395,9 @@ config SPI_PPC4xx
 	help
 	  This selects a driver for the PPC4xx SPI Controller.
 
-config SPI_PXA2XX_PXADMA
-	bool "PXA2xx SSP legacy PXA DMA API support"
-	depends on SPI_PXA2XX && ARCH_PXA
-	help
-	  Enable PXA private legacy DMA API support. Note that this is
-	  deprecated in favor of generic DMA engine API.
-
 config SPI_PXA2XX_DMA
 	def_bool y
-	depends on SPI_PXA2XX && !SPI_PXA2XX_PXADMA
+	depends on SPI_PXA2XX
 
 config SPI_PXA2XX
 	tristate "PXA2xx SSP SPI master"
...@@ -429,6 +423,12 @@ config SPI_ROCKCHIP
 	  The main usecase of this controller is to use spi flash as boot
 	  device.
 
+config SPI_RB4XX
+	tristate "Mikrotik RB4XX SPI master"
+	depends on SPI_MASTER && ATH79
+	help
+	  SPI controller driver for the Mikrotik RB4xx series boards.
+
 config SPI_RSPI
 	tristate "Renesas RSPI/QSPI controller"
 	depends on SUPERH || ARCH_SHMOBILE || COMPILE_TEST
...@@ -610,6 +610,12 @@ config SPI_XTENSA_XTFPGA
 	  16 bit words in SPI mode 0, automatically asserting CS on transfer
 	  start and deasserting on end.
 
+config SPI_ZYNQMP_GQSPI
+	tristate "Xilinx ZynqMP GQSPI controller"
+	depends on SPI_MASTER
+	help
+	  Enables Xilinx GQSPI controller driver for Zynq UltraScale+ MPSoC.
+
 config SPI_NUC900
 	tristate "Nuvoton NUC900 series SPI"
 	depends on ARCH_W90X900
......
...@@ -60,12 +60,12 @@ obj-$(CONFIG_SPI_ORION) += spi-orion.o
 obj-$(CONFIG_SPI_PL022) += spi-pl022.o
 obj-$(CONFIG_SPI_PPC4xx) += spi-ppc4xx.o
 spi-pxa2xx-platform-objs := spi-pxa2xx.o
-spi-pxa2xx-platform-$(CONFIG_SPI_PXA2XX_PXADMA) += spi-pxa2xx-pxadma.o
 spi-pxa2xx-platform-$(CONFIG_SPI_PXA2XX_DMA) += spi-pxa2xx-dma.o
 obj-$(CONFIG_SPI_PXA2XX) += spi-pxa2xx-platform.o
 obj-$(CONFIG_SPI_PXA2XX_PCI) += spi-pxa2xx-pci.o
 obj-$(CONFIG_SPI_QUP) += spi-qup.o
 obj-$(CONFIG_SPI_ROCKCHIP) += spi-rockchip.o
+obj-$(CONFIG_SPI_RB4XX) += spi-rb4xx.o
 obj-$(CONFIG_SPI_RSPI) += spi-rspi.o
 obj-$(CONFIG_SPI_S3C24XX) += spi-s3c24xx-hw.o
 spi-s3c24xx-hw-y := spi-s3c24xx.o
...@@ -89,3 +89,4 @@ obj-$(CONFIG_SPI_TXX9) += spi-txx9.o
 obj-$(CONFIG_SPI_XCOMM) += spi-xcomm.o
 obj-$(CONFIG_SPI_XILINX) += spi-xilinx.o
 obj-$(CONFIG_SPI_XTENSA_XTFPGA) += spi-xtensa-xtfpga.o
+obj-$(CONFIG_SPI_ZYNQMP_GQSPI) += spi-zynqmp-gqspi.o
...@@ -79,10 +79,8 @@ static void ath79_spi_chipselect(struct spi_device *spi, int is_active) ...@@ -79,10 +79,8 @@ static void ath79_spi_chipselect(struct spi_device *spi, int is_active)
} }
if (spi->chip_select) { if (spi->chip_select) {
struct ath79_spi_controller_data *cdata = spi->controller_data;
/* SPI is normally active-low */ /* SPI is normally active-low */
gpio_set_value(cdata->gpio, cs_high); gpio_set_value(spi->cs_gpio, cs_high);
} else { } else {
if (cs_high) if (cs_high)
sp->ioc_base |= AR71XX_SPI_IOC_CS0; sp->ioc_base |= AR71XX_SPI_IOC_CS0;
...@@ -117,11 +115,10 @@ static void ath79_spi_disable(struct ath79_spi *sp) ...@@ -117,11 +115,10 @@ static void ath79_spi_disable(struct ath79_spi *sp)
static int ath79_spi_setup_cs(struct spi_device *spi) static int ath79_spi_setup_cs(struct spi_device *spi)
{ {
struct ath79_spi_controller_data *cdata; struct ath79_spi *sp = ath79_spidev_to_sp(spi);
int status; int status;
cdata = spi->controller_data; if (spi->chip_select && !gpio_is_valid(spi->cs_gpio))
if (spi->chip_select && !cdata)
return -EINVAL; return -EINVAL;
status = 0; status = 0;
...@@ -134,8 +131,15 @@ static int ath79_spi_setup_cs(struct spi_device *spi) ...@@ -134,8 +131,15 @@ static int ath79_spi_setup_cs(struct spi_device *spi)
else else
flags |= GPIOF_INIT_HIGH; flags |= GPIOF_INIT_HIGH;
status = gpio_request_one(cdata->gpio, flags, status = gpio_request_one(spi->cs_gpio, flags,
dev_name(&spi->dev)); dev_name(&spi->dev));
} else {
if (spi->mode & SPI_CS_HIGH)
sp->ioc_base &= ~AR71XX_SPI_IOC_CS0;
else
sp->ioc_base |= AR71XX_SPI_IOC_CS0;
ath79_spi_wr(sp, AR71XX_SPI_REG_IOC, sp->ioc_base);
} }
return status; return status;
...@@ -144,8 +148,7 @@ static int ath79_spi_setup_cs(struct spi_device *spi) ...@@ -144,8 +148,7 @@ static int ath79_spi_setup_cs(struct spi_device *spi)
static void ath79_spi_cleanup_cs(struct spi_device *spi) static void ath79_spi_cleanup_cs(struct spi_device *spi)
{ {
if (spi->chip_select) { if (spi->chip_select) {
struct ath79_spi_controller_data *cdata = spi->controller_data; gpio_free(spi->cs_gpio);
gpio_free(cdata->gpio);
} }
} }
...@@ -217,6 +220,7 @@ static int ath79_spi_probe(struct platform_device *pdev) ...@@ -217,6 +220,7 @@ static int ath79_spi_probe(struct platform_device *pdev)
} }
sp = spi_master_get_devdata(master); sp = spi_master_get_devdata(master);
master->dev.of_node = pdev->dev.of_node;
platform_set_drvdata(pdev, sp); platform_set_drvdata(pdev, sp);
pdata = dev_get_platdata(&pdev->dev); pdata = dev_get_platdata(&pdev->dev);
...@@ -253,7 +257,7 @@ static int ath79_spi_probe(struct platform_device *pdev) ...@@ -253,7 +257,7 @@ static int ath79_spi_probe(struct platform_device *pdev)
goto err_put_master; goto err_put_master;
} }
ret = clk_enable(sp->clk); ret = clk_prepare_enable(sp->clk);
if (ret) if (ret)
goto err_put_master; goto err_put_master;
...@@ -277,7 +281,7 @@ static int ath79_spi_probe(struct platform_device *pdev) ...@@ -277,7 +281,7 @@ static int ath79_spi_probe(struct platform_device *pdev)
err_disable: err_disable:
ath79_spi_disable(sp); ath79_spi_disable(sp);
err_clk_disable: err_clk_disable:
clk_disable(sp->clk); clk_disable_unprepare(sp->clk);
err_put_master: err_put_master:
spi_master_put(sp->bitbang.master); spi_master_put(sp->bitbang.master);
...@@ -290,7 +294,7 @@ static int ath79_spi_remove(struct platform_device *pdev) ...@@ -290,7 +294,7 @@ static int ath79_spi_remove(struct platform_device *pdev)
spi_bitbang_stop(&sp->bitbang); spi_bitbang_stop(&sp->bitbang);
ath79_spi_disable(sp); ath79_spi_disable(sp);
clk_disable(sp->clk); clk_disable_unprepare(sp->clk);
spi_master_put(sp->bitbang.master); spi_master_put(sp->bitbang.master);
return 0; return 0;
...@@ -301,12 +305,18 @@ static void ath79_spi_shutdown(struct platform_device *pdev) ...@@ -301,12 +305,18 @@ static void ath79_spi_shutdown(struct platform_device *pdev)
ath79_spi_remove(pdev); ath79_spi_remove(pdev);
} }
static const struct of_device_id ath79_spi_of_match[] = {
{ .compatible = "qca,ar7100-spi", },
{ },
};
static struct platform_driver ath79_spi_driver = { static struct platform_driver ath79_spi_driver = {
.probe = ath79_spi_probe, .probe = ath79_spi_probe,
.remove = ath79_spi_remove, .remove = ath79_spi_remove,
.shutdown = ath79_spi_shutdown, .shutdown = ath79_spi_shutdown,
.driver = { .driver = {
.name = DRV_NAME, .name = DRV_NAME,
.of_match_table = ath79_spi_of_match,
}, },
}; };
module_platform_driver(ath79_spi_driver); module_platform_driver(ath79_spi_driver);
......
...@@ -41,6 +41,8 @@ ...@@ -41,6 +41,8 @@
#define SPI_CSR1 0x0034 #define SPI_CSR1 0x0034
#define SPI_CSR2 0x0038 #define SPI_CSR2 0x0038
#define SPI_CSR3 0x003c #define SPI_CSR3 0x003c
#define SPI_FMR 0x0040
#define SPI_FLR 0x0044
#define SPI_VERSION 0x00fc #define SPI_VERSION 0x00fc
#define SPI_RPR 0x0100 #define SPI_RPR 0x0100
#define SPI_RCR 0x0104 #define SPI_RCR 0x0104
...@@ -62,6 +64,14 @@ ...@@ -62,6 +64,14 @@
#define SPI_SWRST_SIZE 1 #define SPI_SWRST_SIZE 1
#define SPI_LASTXFER_OFFSET 24 #define SPI_LASTXFER_OFFSET 24
#define SPI_LASTXFER_SIZE 1 #define SPI_LASTXFER_SIZE 1
#define SPI_TXFCLR_OFFSET 16
#define SPI_TXFCLR_SIZE 1
#define SPI_RXFCLR_OFFSET 17
#define SPI_RXFCLR_SIZE 1
#define SPI_FIFOEN_OFFSET 30
#define SPI_FIFOEN_SIZE 1
#define SPI_FIFODIS_OFFSET 31
#define SPI_FIFODIS_SIZE 1
/* Bitfields in MR */ /* Bitfields in MR */
#define SPI_MSTR_OFFSET 0 #define SPI_MSTR_OFFSET 0
...@@ -114,6 +124,22 @@ ...@@ -114,6 +124,22 @@
#define SPI_TXEMPTY_SIZE 1 #define SPI_TXEMPTY_SIZE 1
#define SPI_SPIENS_OFFSET 16 #define SPI_SPIENS_OFFSET 16
#define SPI_SPIENS_SIZE 1 #define SPI_SPIENS_SIZE 1
#define SPI_TXFEF_OFFSET 24
#define SPI_TXFEF_SIZE 1
#define SPI_TXFFF_OFFSET 25
#define SPI_TXFFF_SIZE 1
#define SPI_TXFTHF_OFFSET 26
#define SPI_TXFTHF_SIZE 1
#define SPI_RXFEF_OFFSET 27
#define SPI_RXFEF_SIZE 1
#define SPI_RXFFF_OFFSET 28
#define SPI_RXFFF_SIZE 1
#define SPI_RXFTHF_OFFSET 29
#define SPI_RXFTHF_SIZE 1
#define SPI_TXFPTEF_OFFSET 30
#define SPI_TXFPTEF_SIZE 1
#define SPI_RXFPTEF_OFFSET 31
#define SPI_RXFPTEF_SIZE 1
/* Bitfields in CSR0 */ /* Bitfields in CSR0 */
#define SPI_CPOL_OFFSET 0 #define SPI_CPOL_OFFSET 0
...@@ -157,6 +183,22 @@ ...@@ -157,6 +183,22 @@
#define SPI_TXTDIS_OFFSET 9 #define SPI_TXTDIS_OFFSET 9
#define SPI_TXTDIS_SIZE 1 #define SPI_TXTDIS_SIZE 1
/* Bitfields in FMR */
#define SPI_TXRDYM_OFFSET 0
#define SPI_TXRDYM_SIZE 2
#define SPI_RXRDYM_OFFSET 4
#define SPI_RXRDYM_SIZE 2
#define SPI_TXFTHRES_OFFSET 16
#define SPI_TXFTHRES_SIZE 6
#define SPI_RXFTHRES_OFFSET 24
#define SPI_RXFTHRES_SIZE 6
/* Bitfields in FLR */
#define SPI_TXFL_OFFSET 0
#define SPI_TXFL_SIZE 6
#define SPI_RXFL_OFFSET 16
#define SPI_RXFL_SIZE 6
/* Constants for BITS */ /* Constants for BITS */
#define SPI_BITS_8_BPT 0 #define SPI_BITS_8_BPT 0
#define SPI_BITS_9_BPT 1 #define SPI_BITS_9_BPT 1
...@@ -167,6 +209,9 @@ ...@@ -167,6 +209,9 @@
#define SPI_BITS_14_BPT 6 #define SPI_BITS_14_BPT 6
#define SPI_BITS_15_BPT 7 #define SPI_BITS_15_BPT 7
#define SPI_BITS_16_BPT 8 #define SPI_BITS_16_BPT 8
#define SPI_ONE_DATA 0
#define SPI_TWO_DATA 1
#define SPI_FOUR_DATA 2
/* Bit manipulation macros */ /* Bit manipulation macros */
#define SPI_BIT(name) \ #define SPI_BIT(name) \
...@@ -185,11 +230,31 @@ ...@@ -185,11 +230,31 @@
__raw_readl((port)->regs + SPI_##reg) __raw_readl((port)->regs + SPI_##reg)
#define spi_writel(port, reg, value) \ #define spi_writel(port, reg, value) \
__raw_writel((value), (port)->regs + SPI_##reg) __raw_writel((value), (port)->regs + SPI_##reg)
#define spi_readw(port, reg) \
__raw_readw((port)->regs + SPI_##reg)
#define spi_writew(port, reg, value) \
__raw_writew((value), (port)->regs + SPI_##reg)
#define spi_readb(port, reg) \
__raw_readb((port)->regs + SPI_##reg)
#define spi_writeb(port, reg, value) \
__raw_writeb((value), (port)->regs + SPI_##reg)
#else #else
#define spi_readl(port, reg) \ #define spi_readl(port, reg) \
readl_relaxed((port)->regs + SPI_##reg) readl_relaxed((port)->regs + SPI_##reg)
#define spi_writel(port, reg, value) \ #define spi_writel(port, reg, value) \
writel_relaxed((value), (port)->regs + SPI_##reg) writel_relaxed((value), (port)->regs + SPI_##reg)
#define spi_readw(port, reg) \
readw_relaxed((port)->regs + SPI_##reg)
#define spi_writew(port, reg, value) \
writew_relaxed((value), (port)->regs + SPI_##reg)
#define spi_readb(port, reg) \
readb_relaxed((port)->regs + SPI_##reg)
#define spi_writeb(port, reg, value) \
writeb_relaxed((value), (port)->regs + SPI_##reg)
#endif #endif
/* use PIO for small transfers, avoiding DMA setup/teardown overhead and /* use PIO for small transfers, avoiding DMA setup/teardown overhead and
* cache operations; better heuristics consider wordsize and bitrate. * cache operations; better heuristics consider wordsize and bitrate.
...@@ -246,11 +311,14 @@ struct atmel_spi { ...@@ -246,11 +311,14 @@ struct atmel_spi {
bool use_dma; bool use_dma;
bool use_pdc; bool use_pdc;
bool use_cs_gpios;
/* dmaengine data */ /* dmaengine data */
struct atmel_spi_dma dma; struct atmel_spi_dma dma;
bool keep_cs; bool keep_cs;
bool cs_active; bool cs_active;
u32 fifo_size;
}; };
/* Controller-specific per-slave state */ /* Controller-specific per-slave state */
...@@ -321,7 +389,8 @@ static void cs_activate(struct atmel_spi *as, struct spi_device *spi) ...@@ -321,7 +389,8 @@ static void cs_activate(struct atmel_spi *as, struct spi_device *spi)
} }
mr = spi_readl(as, MR); mr = spi_readl(as, MR);
gpio_set_value(asd->npcs_pin, active); if (as->use_cs_gpios)
gpio_set_value(asd->npcs_pin, active);
} else { } else {
u32 cpol = (spi->mode & SPI_CPOL) ? SPI_BIT(CPOL) : 0; u32 cpol = (spi->mode & SPI_CPOL) ? SPI_BIT(CPOL) : 0;
int i; int i;
...@@ -337,7 +406,7 @@ static void cs_activate(struct atmel_spi *as, struct spi_device *spi) ...@@ -337,7 +406,7 @@ static void cs_activate(struct atmel_spi *as, struct spi_device *spi)
mr = spi_readl(as, MR); mr = spi_readl(as, MR);
mr = SPI_BFINS(PCS, ~(1 << spi->chip_select), mr); mr = SPI_BFINS(PCS, ~(1 << spi->chip_select), mr);
if (spi->chip_select != 0) if (as->use_cs_gpios && spi->chip_select != 0)
gpio_set_value(asd->npcs_pin, active); gpio_set_value(asd->npcs_pin, active);
spi_writel(as, MR, mr); spi_writel(as, MR, mr);
} }
...@@ -366,7 +435,9 @@ static void cs_deactivate(struct atmel_spi *as, struct spi_device *spi) ...@@ -366,7 +435,9 @@ static void cs_deactivate(struct atmel_spi *as, struct spi_device *spi)
asd->npcs_pin, active ? " (low)" : "", asd->npcs_pin, active ? " (low)" : "",
mr); mr);
if (atmel_spi_is_v2(as) || spi->chip_select != 0) if (!as->use_cs_gpios)
spi_writel(as, CR, SPI_BIT(LASTXFER));
else if (atmel_spi_is_v2(as) || spi->chip_select != 0)
gpio_set_value(asd->npcs_pin, !active); gpio_set_value(asd->npcs_pin, !active);
} }
...@@ -406,6 +477,20 @@ static int atmel_spi_dma_slave_config(struct atmel_spi *as, ...@@ -406,6 +477,20 @@ static int atmel_spi_dma_slave_config(struct atmel_spi *as,
slave_config->dst_maxburst = 1; slave_config->dst_maxburst = 1;
slave_config->device_fc = false; slave_config->device_fc = false;
/*
 * This driver uses fixed peripheral select mode (the PS bit is set to '0'
 * in the Mode Register).
 * So, according to the datasheet, when FIFOs are available (and
 * enabled), the Transmit FIFO operates in Multiple Data Mode.
 * In this mode, up to 2 data (not 4) can be written into the Transmit
 * Data Register in a single access.
 * However, the first data item has to be written into the lowest 16 bits
 * and the second data item into the highest 16 bits of the Transmit
 * Data Register. For 8-bit data (the most frequent case), this would
 * require reworking tx_buf so that each data item actually fits in 16
 * bits. So we'd rather write only one data item at a time. Hence the
 * transmit path works the same whether FIFOs are available (and enabled)
 * or not.
 */
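/*
 * Editor's illustration (not part of the patch): the 16-bit packing that
 * Multiple Data Mode would require is exactly what the PIO FIFO fill loop
 * in atmel_spi_next_xfer_fifo() below already does, two 8-bit data per
 * Transmit Data Register access:
 *
 *	td0 = *bytes++;				// first data item -> lowest 16 bits
 *	td1 = *bytes++;				// second data item -> highest 16 bits
 *	spi_writel(as, TDR, (td1 << 16) | td0);	// one register write, two data
 */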
slave_config->direction = DMA_MEM_TO_DEV; slave_config->direction = DMA_MEM_TO_DEV;
if (dmaengine_slave_config(as->dma.chan_tx, slave_config)) { if (dmaengine_slave_config(as->dma.chan_tx, slave_config)) {
dev_err(&as->pdev->dev, dev_err(&as->pdev->dev,
...@@ -413,6 +498,14 @@ static int atmel_spi_dma_slave_config(struct atmel_spi *as, ...@@ -413,6 +498,14 @@ static int atmel_spi_dma_slave_config(struct atmel_spi *as,
err = -EINVAL; err = -EINVAL;
} }
/*
* This driver configures the spi controller for master mode (MSTR bit
* set to '1' in the Mode Register).
* So according to the datasheet, when FIFOs are available (and
* enabled), the Receive FIFO operates in Single Data Mode.
* So the receive path works the same whether FIFOs are available (and
* enabled) or not.
*/
slave_config->direction = DMA_DEV_TO_MEM; slave_config->direction = DMA_DEV_TO_MEM;
if (dmaengine_slave_config(as->dma.chan_rx, slave_config)) { if (dmaengine_slave_config(as->dma.chan_rx, slave_config)) {
dev_err(&as->pdev->dev, dev_err(&as->pdev->dev,
...@@ -502,10 +595,10 @@ static void dma_callback(void *data) ...@@ -502,10 +595,10 @@ static void dma_callback(void *data)
} }
/* /*
* Next transfer using PIO. * Next transfer using PIO without FIFO.
*/ */
static void atmel_spi_next_xfer_pio(struct spi_master *master, static void atmel_spi_next_xfer_single(struct spi_master *master,
struct spi_transfer *xfer) struct spi_transfer *xfer)
{ {
struct atmel_spi *as = spi_master_get_devdata(master); struct atmel_spi *as = spi_master_get_devdata(master);
unsigned long xfer_pos = xfer->len - as->current_remaining_bytes; unsigned long xfer_pos = xfer->len - as->current_remaining_bytes;
...@@ -537,6 +630,99 @@ static void atmel_spi_next_xfer_pio(struct spi_master *master, ...@@ -537,6 +630,99 @@ static void atmel_spi_next_xfer_pio(struct spi_master *master,
spi_writel(as, IER, SPI_BIT(RDRF) | SPI_BIT(OVRES)); spi_writel(as, IER, SPI_BIT(RDRF) | SPI_BIT(OVRES));
} }
/*
* Next transfer using PIO with FIFO.
*/
static void atmel_spi_next_xfer_fifo(struct spi_master *master,
struct spi_transfer *xfer)
{
struct atmel_spi *as = spi_master_get_devdata(master);
u32 current_remaining_data, num_data;
u32 offset = xfer->len - as->current_remaining_bytes;
const u16 *words = (const u16 *)((u8 *)xfer->tx_buf + offset);
const u8 *bytes = (const u8 *)((u8 *)xfer->tx_buf + offset);
u16 td0, td1;
u32 fifomr;
dev_vdbg(master->dev.parent, "atmel_spi_next_xfer_fifo\n");
/* Compute the number of data to transfer in the current iteration */
current_remaining_data = ((xfer->bits_per_word > 8) ?
((u32)as->current_remaining_bytes >> 1) :
(u32)as->current_remaining_bytes);
num_data = min(current_remaining_data, as->fifo_size);
/* Flush RX and TX FIFOs */
spi_writel(as, CR, SPI_BIT(RXFCLR) | SPI_BIT(TXFCLR));
while (spi_readl(as, FLR))
cpu_relax();
/* Set RX FIFO Threshold to the number of data to transfer */
fifomr = spi_readl(as, FMR);
spi_writel(as, FMR, SPI_BFINS(RXFTHRES, num_data, fifomr));
/* Clear FIFO flags in the Status Register, especially RXFTHF */
(void)spi_readl(as, SR);
/* Fill TX FIFO */
while (num_data >= 2) {
if (xfer->tx_buf) {
if (xfer->bits_per_word > 8) {
td0 = *words++;
td1 = *words++;
} else {
td0 = *bytes++;
td1 = *bytes++;
}
} else {
td0 = 0;
td1 = 0;
}
spi_writel(as, TDR, (td1 << 16) | td0);
num_data -= 2;
}
if (num_data) {
if (xfer->tx_buf) {
if (xfer->bits_per_word > 8)
td0 = *words++;
else
td0 = *bytes++;
} else {
td0 = 0;
}
spi_writew(as, TDR, td0);
num_data--;
}
dev_dbg(master->dev.parent,
" start fifo xfer %p: len %u tx %p rx %p bitpw %d\n",
xfer, xfer->len, xfer->tx_buf, xfer->rx_buf,
xfer->bits_per_word);
/*
* Enable RX FIFO Threshold Flag interrupt to be notified about
* transfer completion.
*/
spi_writel(as, IER, SPI_BIT(RXFTHF) | SPI_BIT(OVRES));
}
/*
* Next transfer using PIO.
*/
static void atmel_spi_next_xfer_pio(struct spi_master *master,
struct spi_transfer *xfer)
{
struct atmel_spi *as = spi_master_get_devdata(master);
if (as->fifo_size)
atmel_spi_next_xfer_fifo(master, xfer);
else
atmel_spi_next_xfer_single(master, xfer);
}
/* /*
* Submit next transfer for DMA. * Submit next transfer for DMA.
*/ */
...@@ -839,13 +1025,8 @@ static void atmel_spi_disable_pdc_transfer(struct atmel_spi *as) ...@@ -839,13 +1025,8 @@ static void atmel_spi_disable_pdc_transfer(struct atmel_spi *as)
spi_writel(as, PTCR, SPI_BIT(RXTDIS) | SPI_BIT(TXTDIS)); spi_writel(as, PTCR, SPI_BIT(RXTDIS) | SPI_BIT(TXTDIS));
} }
/* Called from IRQ
*
* Must update "current_remaining_bytes" to keep track of data
* to transfer.
*/
static void static void
atmel_spi_pump_pio_data(struct atmel_spi *as, struct spi_transfer *xfer) atmel_spi_pump_single_data(struct atmel_spi *as, struct spi_transfer *xfer)
{ {
u8 *rxp; u8 *rxp;
u16 *rxp16; u16 *rxp16;
...@@ -872,6 +1053,57 @@ atmel_spi_pump_pio_data(struct atmel_spi *as, struct spi_transfer *xfer) ...@@ -872,6 +1053,57 @@ atmel_spi_pump_pio_data(struct atmel_spi *as, struct spi_transfer *xfer)
} }
} }
static void
atmel_spi_pump_fifo_data(struct atmel_spi *as, struct spi_transfer *xfer)
{
u32 fifolr = spi_readl(as, FLR);
u32 num_bytes, num_data = SPI_BFEXT(RXFL, fifolr);
u32 offset = xfer->len - as->current_remaining_bytes;
u16 *words = (u16 *)((u8 *)xfer->rx_buf + offset);
u8 *bytes = (u8 *)((u8 *)xfer->rx_buf + offset);
u16 rd; /* RD field is the lowest 16 bits of RDR */
/* Update the number of remaining bytes to transfer */
num_bytes = ((xfer->bits_per_word > 8) ?
(num_data << 1) :
num_data);
if (as->current_remaining_bytes > num_bytes)
as->current_remaining_bytes -= num_bytes;
else
as->current_remaining_bytes = 0;
/* Handle odd number of bytes when data are more than 8bit width */
if (xfer->bits_per_word > 8)
as->current_remaining_bytes &= ~0x1;
/* Read data */
while (num_data) {
rd = spi_readl(as, RDR);
if (xfer->rx_buf) {
if (xfer->bits_per_word > 8)
*words++ = rd;
else
*bytes++ = rd;
}
num_data--;
}
}
/* Called from IRQ
*
* Must update "current_remaining_bytes" to keep track of data
* to transfer.
*/
static void
atmel_spi_pump_pio_data(struct atmel_spi *as, struct spi_transfer *xfer)
{
if (as->fifo_size)
atmel_spi_pump_fifo_data(as, xfer);
else
atmel_spi_pump_single_data(as, xfer);
}
/* Interrupt /* Interrupt
* *
* No need for locking in this Interrupt handler: done_status is the * No need for locking in this Interrupt handler: done_status is the
...@@ -912,7 +1144,7 @@ atmel_spi_pio_interrupt(int irq, void *dev_id) ...@@ -912,7 +1144,7 @@ atmel_spi_pio_interrupt(int irq, void *dev_id)
complete(&as->xfer_completion); complete(&as->xfer_completion);
} else if (pending & SPI_BIT(RDRF)) { } else if (pending & (SPI_BIT(RDRF) | SPI_BIT(RXFTHF))) {
atmel_spi_lock(as); atmel_spi_lock(as);
if (as->current_remaining_bytes) { if (as->current_remaining_bytes) {
...@@ -996,6 +1228,8 @@ static int atmel_spi_setup(struct spi_device *spi) ...@@ -996,6 +1228,8 @@ static int atmel_spi_setup(struct spi_device *spi)
csr |= SPI_BIT(CPOL); csr |= SPI_BIT(CPOL);
if (!(spi->mode & SPI_CPHA)) if (!(spi->mode & SPI_CPHA))
csr |= SPI_BIT(NCPHA); csr |= SPI_BIT(NCPHA);
if (!as->use_cs_gpios)
csr |= SPI_BIT(CSAAT);
/* DLYBS is mostly irrelevant since we manage chipselect using GPIOs. /* DLYBS is mostly irrelevant since we manage chipselect using GPIOs.
* *
...@@ -1009,7 +1243,9 @@ static int atmel_spi_setup(struct spi_device *spi) ...@@ -1009,7 +1243,9 @@ static int atmel_spi_setup(struct spi_device *spi)
/* chipselect must have been muxed as GPIO (e.g. in board setup) */ /* chipselect must have been muxed as GPIO (e.g. in board setup) */
npcs_pin = (unsigned long)spi->controller_data; npcs_pin = (unsigned long)spi->controller_data;
if (gpio_is_valid(spi->cs_gpio)) if (!as->use_cs_gpios)
npcs_pin = spi->chip_select;
else if (gpio_is_valid(spi->cs_gpio))
npcs_pin = spi->cs_gpio; npcs_pin = spi->cs_gpio;
asd = spi->controller_state; asd = spi->controller_state;
...@@ -1018,15 +1254,19 @@ static int atmel_spi_setup(struct spi_device *spi) ...@@ -1018,15 +1254,19 @@ static int atmel_spi_setup(struct spi_device *spi)
if (!asd) if (!asd)
return -ENOMEM; return -ENOMEM;
ret = gpio_request(npcs_pin, dev_name(&spi->dev)); if (as->use_cs_gpios) {
if (ret) { ret = gpio_request(npcs_pin, dev_name(&spi->dev));
kfree(asd); if (ret) {
return ret; kfree(asd);
return ret;
}
gpio_direction_output(npcs_pin,
!(spi->mode & SPI_CS_HIGH));
} }
asd->npcs_pin = npcs_pin; asd->npcs_pin = npcs_pin;
spi->controller_state = asd; spi->controller_state = asd;
gpio_direction_output(npcs_pin, !(spi->mode & SPI_CS_HIGH));
} }
asd->csr = csr; asd->csr = csr;
...@@ -1338,6 +1578,13 @@ static int atmel_spi_probe(struct platform_device *pdev) ...@@ -1338,6 +1578,13 @@ static int atmel_spi_probe(struct platform_device *pdev)
atmel_get_caps(as); atmel_get_caps(as);
as->use_cs_gpios = true;
if (atmel_spi_is_v2(as) &&
!of_get_property(pdev->dev.of_node, "cs-gpios", NULL)) {
as->use_cs_gpios = false;
master->num_chipselect = 4;
}
as->use_dma = false; as->use_dma = false;
as->use_pdc = false; as->use_pdc = false;
if (as->caps.has_dma_support) { if (as->caps.has_dma_support) {
...@@ -1380,6 +1627,13 @@ static int atmel_spi_probe(struct platform_device *pdev) ...@@ -1380,6 +1627,13 @@ static int atmel_spi_probe(struct platform_device *pdev)
spi_writel(as, PTCR, SPI_BIT(RXTDIS) | SPI_BIT(TXTDIS)); spi_writel(as, PTCR, SPI_BIT(RXTDIS) | SPI_BIT(TXTDIS));
spi_writel(as, CR, SPI_BIT(SPIEN)); spi_writel(as, CR, SPI_BIT(SPIEN));
as->fifo_size = 0;
if (!of_property_read_u32(pdev->dev.of_node, "atmel,fifo-size",
&as->fifo_size)) {
dev_info(&pdev->dev, "Using FIFO (%u data)\n", as->fifo_size);
spi_writel(as, CR, SPI_BIT(FIFOEN));
}
/* go! */ /* go! */
dev_info(&pdev->dev, "Atmel SPI Controller at 0x%08lx (irq %d)\n", dev_info(&pdev->dev, "Atmel SPI Controller at 0x%08lx (irq %d)\n",
(unsigned long)regs->start, irq); (unsigned long)regs->start, irq);
......
...@@ -20,18 +20,22 @@ ...@@ -20,18 +20,22 @@
* GNU General Public License for more details. * GNU General Public License for more details.
*/ */
#include <asm/page.h>
#include <linux/clk.h> #include <linux/clk.h>
#include <linux/completion.h> #include <linux/completion.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/of_irq.h> #include <linux/of_address.h>
#include <linux/of_gpio.h>
#include <linux/of_device.h> #include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <linux/of_irq.h>
#include <linux/spi/spi.h> #include <linux/spi/spi.h>
/* SPI register offsets */ /* SPI register offsets */
...@@ -69,7 +73,8 @@ ...@@ -69,7 +73,8 @@
#define BCM2835_SPI_CS_CS_01 0x00000001 #define BCM2835_SPI_CS_CS_01 0x00000001
#define BCM2835_SPI_POLLING_LIMIT_US 30 #define BCM2835_SPI_POLLING_LIMIT_US 30
#define BCM2835_SPI_TIMEOUT_MS 30000 #define BCM2835_SPI_POLLING_JIFFIES 2
#define BCM2835_SPI_DMA_MIN_LENGTH 96
#define BCM2835_SPI_MODE_BITS (SPI_CPOL | SPI_CPHA | SPI_CS_HIGH \ #define BCM2835_SPI_MODE_BITS (SPI_CPOL | SPI_CPHA | SPI_CS_HIGH \
| SPI_NO_CS | SPI_3WIRE) | SPI_NO_CS | SPI_3WIRE)
...@@ -83,6 +88,7 @@ struct bcm2835_spi { ...@@ -83,6 +88,7 @@ struct bcm2835_spi {
u8 *rx_buf; u8 *rx_buf;
int tx_len; int tx_len;
int rx_len; int rx_len;
bool dma_pending;
}; };
static inline u32 bcm2835_rd(struct bcm2835_spi *bs, unsigned reg) static inline u32 bcm2835_rd(struct bcm2835_spi *bs, unsigned reg)
...@@ -128,12 +134,15 @@ static void bcm2835_spi_reset_hw(struct spi_master *master) ...@@ -128,12 +134,15 @@ static void bcm2835_spi_reset_hw(struct spi_master *master)
/* Disable SPI interrupts and transfer */ /* Disable SPI interrupts and transfer */
cs &= ~(BCM2835_SPI_CS_INTR | cs &= ~(BCM2835_SPI_CS_INTR |
BCM2835_SPI_CS_INTD | BCM2835_SPI_CS_INTD |
BCM2835_SPI_CS_DMAEN |
BCM2835_SPI_CS_TA); BCM2835_SPI_CS_TA);
/* and reset RX/TX FIFOS */ /* and reset RX/TX FIFOS */
cs |= BCM2835_SPI_CS_CLEAR_RX | BCM2835_SPI_CS_CLEAR_TX; cs |= BCM2835_SPI_CS_CLEAR_RX | BCM2835_SPI_CS_CLEAR_TX;
/* and reset the SPI_HW */ /* and reset the SPI_HW */
bcm2835_wr(bs, BCM2835_SPI_CS, cs); bcm2835_wr(bs, BCM2835_SPI_CS, cs);
/* as well as DLEN */
bcm2835_wr(bs, BCM2835_SPI_DLEN, 0);
} }
static irqreturn_t bcm2835_spi_interrupt(int irq, void *dev_id) static irqreturn_t bcm2835_spi_interrupt(int irq, void *dev_id)
...@@ -157,42 +166,6 @@ static irqreturn_t bcm2835_spi_interrupt(int irq, void *dev_id) ...@@ -157,42 +166,6 @@ static irqreturn_t bcm2835_spi_interrupt(int irq, void *dev_id)
return IRQ_HANDLED; return IRQ_HANDLED;
} }
static int bcm2835_spi_transfer_one_poll(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *tfr,
u32 cs,
unsigned long xfer_time_us)
{
struct bcm2835_spi *bs = spi_master_get_devdata(master);
/* set timeout to 1 second of maximum polling */
unsigned long timeout = jiffies + HZ;
/* enable HW block without interrupts */
bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_TA);
/* loop until finished the transfer */
while (bs->rx_len) {
/* read from fifo as much as possible */
bcm2835_rd_fifo(bs);
/* fill in tx fifo as much as possible */
bcm2835_wr_fifo(bs);
/* if we still expect some data after the read,
* check for a possible timeout
*/
if (bs->rx_len && time_after(jiffies, timeout)) {
/* Transfer complete - reset SPI HW */
bcm2835_spi_reset_hw(master);
/* and return timeout */
return -ETIMEDOUT;
}
}
/* Transfer complete - reset SPI HW */
bcm2835_spi_reset_hw(master);
/* and return without waiting for completion */
return 0;
}
static int bcm2835_spi_transfer_one_irq(struct spi_master *master, static int bcm2835_spi_transfer_one_irq(struct spi_master *master,
struct spi_device *spi, struct spi_device *spi,
struct spi_transfer *tfr, struct spi_transfer *tfr,
...@@ -229,6 +202,329 @@ static int bcm2835_spi_transfer_one_irq(struct spi_master *master, ...@@ -229,6 +202,329 @@ static int bcm2835_spi_transfer_one_irq(struct spi_master *master,
return 1; return 1;
} }
/*
 * DMA support
 *
 * This implementation currently has a few issues in that it does not
 * work around limitations of the HW.
 *
 * The main one is that DMA transfers are limited to 16 bits
 * (so 0 to 65535 bytes) by the SPI HW, due to BCM2835_SPI_DLEN.
 *
 * We also currently assume that the scatter-gather fragments are all
 * multiples of 4 (except the last one) - otherwise we would need to
 * reset the FIFO before subsequent transfers...
 * This also means that the tx/rx scatter-gather lists need to be of
 * equal size!
 *
 * There may be a few more border cases we need to address as well,
 * but unfortunately that would mean splitting up the scatter-gather
 * list, making it slightly impractical...
 */
static void bcm2835_spi_dma_done(void *data)
{
struct spi_master *master = data;
struct bcm2835_spi *bs = spi_master_get_devdata(master);
/* reset fifo and HW */
bcm2835_spi_reset_hw(master);
/* Terminate tx-dma, as we do not have an irq for it: by the time the
 * rx dma terminates and this callback is called, the tx-dma must
 * already have finished - we cannot get here otherwise.
 */
dmaengine_terminate_all(master->dma_tx);
/* mark as no longer pending */
bs->dma_pending = 0;
/* and mark as completed */
complete(&master->xfer_completion);
}
static int bcm2835_spi_prepare_sg(struct spi_master *master,
struct spi_transfer *tfr,
bool is_tx)
{
struct dma_chan *chan;
struct scatterlist *sgl;
unsigned int nents;
enum dma_transfer_direction dir;
unsigned long flags;
struct dma_async_tx_descriptor *desc;
dma_cookie_t cookie;
if (is_tx) {
dir = DMA_MEM_TO_DEV;
chan = master->dma_tx;
nents = tfr->tx_sg.nents;
sgl = tfr->tx_sg.sgl;
flags = 0 /* no tx interrupt */;
} else {
dir = DMA_DEV_TO_MEM;
chan = master->dma_rx;
nents = tfr->rx_sg.nents;
sgl = tfr->rx_sg.sgl;
flags = DMA_PREP_INTERRUPT;
}
/* prepare the channel */
desc = dmaengine_prep_slave_sg(chan, sgl, nents, dir, flags);
if (!desc)
return -EINVAL;
/* set callback for rx */
if (!is_tx) {
desc->callback = bcm2835_spi_dma_done;
desc->callback_param = master;
}
/* submit it to DMA-engine */
cookie = dmaengine_submit(desc);
return dma_submit_error(cookie);
}
static inline int bcm2835_check_sg_length(struct sg_table *sgt)
{
int i;
struct scatterlist *sgl;
/* check that the sg entries are word-sized (except for last) */
for_each_sg(sgt->sgl, sgl, (int)sgt->nents - 1, i) {
if (sg_dma_len(sgl) % 4)
return -EFAULT;
}
return 0;
}
static int bcm2835_spi_transfer_one_dma(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *tfr,
u32 cs)
{
struct bcm2835_spi *bs = spi_master_get_devdata(master);
int ret;
/* check that the scatter gather segments are all a multiple of 4 */
if (bcm2835_check_sg_length(&tfr->tx_sg) ||
bcm2835_check_sg_length(&tfr->rx_sg)) {
dev_warn_once(&spi->dev,
"scatter gather segment length is not a multiple of 4 - falling back to interrupt mode\n");
return bcm2835_spi_transfer_one_irq(master, spi, tfr, cs);
}
/* setup tx-DMA */
ret = bcm2835_spi_prepare_sg(master, tfr, true);
if (ret)
return ret;
/* start TX early */
dma_async_issue_pending(master->dma_tx);
/* mark as dma pending */
bs->dma_pending = 1;
/* set the DMA length */
bcm2835_wr(bs, BCM2835_SPI_DLEN, tfr->len);
/* start the HW */
bcm2835_wr(bs, BCM2835_SPI_CS,
cs | BCM2835_SPI_CS_TA | BCM2835_SPI_CS_DMAEN);
/* setup rx-DMA late - to run transfers while
* mapping of the rx buffers still takes place
* this saves 10us or more.
*/
ret = bcm2835_spi_prepare_sg(master, tfr, false);
if (ret) {
/* need to reset on errors */
dmaengine_terminate_all(master->dma_tx);
bcm2835_spi_reset_hw(master);
return ret;
}
/* start rx dma late */
dma_async_issue_pending(master->dma_rx);
/* wait for wakeup in framework */
return 1;
}
static bool bcm2835_spi_can_dma(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *tfr)
{
/* only run for gpio_cs */
if (!gpio_is_valid(spi->cs_gpio))
return false;
/* we start DMA efforts only on bigger transfers */
if (tfr->len < BCM2835_SPI_DMA_MIN_LENGTH)
return false;
/* BCM2835_SPI_DLEN has defined a max transfer size as
* 16 bit, so max is 65535
* we can revisit this by using an alternative transfer
* method - ideally this would get done without any more
* interaction...
*/
if (tfr->len > 65535) {
dev_warn_once(&spi->dev,
"transfer size of %d too big for dma-transfer\n",
tfr->len);
return false;
}
/* if we run rx/tx_buf with word aligned addresses then we are OK */
if ((((size_t)tfr->rx_buf & 3) == 0) &&
(((size_t)tfr->tx_buf & 3) == 0))
return true;
/* otherwise we only allow transfers within the same page
* to avoid wasting time on dma_mapping when it is not practical
*/
if (((size_t)tfr->tx_buf & PAGE_MASK) + tfr->len > PAGE_SIZE) {
dev_warn_once(&spi->dev,
"Unaligned spi tx-transfer bridging page\n");
return false;
}
if (((size_t)tfr->rx_buf & PAGE_MASK) + tfr->len > PAGE_SIZE) {
dev_warn_once(&spi->dev,
"Unaligned spi tx-transfer bridging page\n");
return false;
}
/* return OK */
return true;
}
static void bcm2835_dma_release(struct spi_master *master)
{
if (master->dma_tx) {
dmaengine_terminate_all(master->dma_tx);
dma_release_channel(master->dma_tx);
master->dma_tx = NULL;
}
if (master->dma_rx) {
dmaengine_terminate_all(master->dma_rx);
dma_release_channel(master->dma_rx);
master->dma_rx = NULL;
}
}
static void bcm2835_dma_init(struct spi_master *master, struct device *dev)
{
struct dma_slave_config slave_config;
const __be32 *addr;
dma_addr_t dma_reg_base;
int ret;
/* base address in dma-space */
addr = of_get_address(master->dev.of_node, 0, NULL, NULL);
if (!addr) {
dev_err(dev, "could not get DMA-register address - not using dma mode\n");
goto err;
}
dma_reg_base = be32_to_cpup(addr);
/* get tx/rx dma */
master->dma_tx = dma_request_slave_channel(dev, "tx");
if (!master->dma_tx) {
dev_err(dev, "no tx-dma configuration found - not using dma mode\n");
goto err;
}
master->dma_rx = dma_request_slave_channel(dev, "rx");
if (!master->dma_rx) {
dev_err(dev, "no rx-dma configuration found - not using dma mode\n");
goto err_release;
}
/* configure DMAs */
slave_config.direction = DMA_MEM_TO_DEV;
slave_config.dst_addr = (u32)(dma_reg_base + BCM2835_SPI_FIFO);
slave_config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
ret = dmaengine_slave_config(master->dma_tx, &slave_config);
if (ret)
goto err_config;
slave_config.direction = DMA_DEV_TO_MEM;
slave_config.src_addr = (u32)(dma_reg_base + BCM2835_SPI_FIFO);
slave_config.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
ret = dmaengine_slave_config(master->dma_rx, &slave_config);
if (ret)
goto err_config;
/* all went well, so set can_dma */
master->can_dma = bcm2835_spi_can_dma;
master->max_dma_len = 65535; /* limitation by BCM2835_SPI_DLEN */
/* need to do TX AND RX DMA, so we need dummy buffers */
master->flags = SPI_MASTER_MUST_RX | SPI_MASTER_MUST_TX;
return;
err_config:
dev_err(dev, "issue configuring dma: %d - not using DMA mode\n",
ret);
err_release:
bcm2835_dma_release(master);
err:
return;
}
static int bcm2835_spi_transfer_one_poll(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *tfr,
u32 cs,
unsigned long xfer_time_us)
{
struct bcm2835_spi *bs = spi_master_get_devdata(master);
unsigned long timeout;
/* enable HW block without interrupts */
bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_TA);
/* fill in the fifo before timeout calculations
* if we are interrupted here, then the data is
* getting transferred by the HW while we are interrupted
*/
bcm2835_wr_fifo(bs);
/* set the timeout */
timeout = jiffies + BCM2835_SPI_POLLING_JIFFIES;
/* loop until finished the transfer */
while (bs->rx_len) {
/* fill in tx fifo with remaining data */
bcm2835_wr_fifo(bs);
/* read from fifo as much as possible */
bcm2835_rd_fifo(bs);
/* if there is still data pending to read
* then check the timeout
*/
if (bs->rx_len && time_after(jiffies, timeout)) {
dev_dbg_ratelimited(&spi->dev,
"timeout period reached: jiffies: %lu remaining tx/rx: %d/%d - falling back to interrupt mode\n",
jiffies - timeout,
bs->tx_len, bs->rx_len);
/* fall back to interrupt mode */
return bcm2835_spi_transfer_one_irq(master, spi,
tfr, cs);
}
}
/* Transfer complete - reset SPI HW */
bcm2835_spi_reset_hw(master);
/* and return without waiting for completion */
return 0;
}
static int bcm2835_spi_transfer_one(struct spi_master *master, static int bcm2835_spi_transfer_one(struct spi_master *master,
struct spi_device *spi, struct spi_device *spi,
struct spi_transfer *tfr) struct spi_transfer *tfr)
...@@ -288,12 +584,26 @@ static int bcm2835_spi_transfer_one(struct spi_master *master, ...@@ -288,12 +584,26 @@ static int bcm2835_spi_transfer_one(struct spi_master *master,
return bcm2835_spi_transfer_one_poll(master, spi, tfr, return bcm2835_spi_transfer_one_poll(master, spi, tfr,
cs, xfer_time_us); cs, xfer_time_us);
/* run in dma mode if conditions are right */
if (master->can_dma && bcm2835_spi_can_dma(master, spi, tfr))
return bcm2835_spi_transfer_one_dma(master, spi, tfr, cs);
/* run in interrupt-mode */
return bcm2835_spi_transfer_one_irq(master, spi, tfr, cs); return bcm2835_spi_transfer_one_irq(master, spi, tfr, cs);
} }
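/*
 * Editor's note (summary, not part of the patch): with the DMA additions
 * above, bcm2835_spi_transfer_one() now picks one of three modes, roughly:
 *
 *	polling    - transfers estimated to complete in less than
 *	             BCM2835_SPI_POLLING_LIMIT_US (30 us);
 *	DMA        - transfers of at least BCM2835_SPI_DMA_MIN_LENGTH (96)
 *	             bytes that pass bcm2835_spi_can_dma();
 *	interrupt  - everything else, and the fallback when polling times
 *	             out or the scatter-gather layout is unsuitable.
 */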
static void bcm2835_spi_handle_err(struct spi_master *master, static void bcm2835_spi_handle_err(struct spi_master *master,
struct spi_message *msg) struct spi_message *msg)
{ {
struct bcm2835_spi *bs = spi_master_get_devdata(master);
/* if an error occurred and we have an active dma, then terminate */
if (bs->dma_pending) {
dmaengine_terminate_all(master->dma_tx);
dmaengine_terminate_all(master->dma_rx);
bs->dma_pending = 0;
}
/* and reset */
bcm2835_spi_reset_hw(master); bcm2835_spi_reset_hw(master);
} }
...@@ -463,6 +773,8 @@ static int bcm2835_spi_probe(struct platform_device *pdev) ...@@ -463,6 +773,8 @@ static int bcm2835_spi_probe(struct platform_device *pdev)
goto out_clk_disable; goto out_clk_disable;
} }
bcm2835_dma_init(master, &pdev->dev);
/* initialise the hardware with the default polarities */ /* initialise the hardware with the default polarities */
bcm2835_wr(bs, BCM2835_SPI_CS, bcm2835_wr(bs, BCM2835_SPI_CS,
BCM2835_SPI_CS_CLEAR_RX | BCM2835_SPI_CS_CLEAR_TX); BCM2835_SPI_CS_CLEAR_RX | BCM2835_SPI_CS_CLEAR_TX);
...@@ -493,6 +805,8 @@ static int bcm2835_spi_remove(struct platform_device *pdev) ...@@ -493,6 +805,8 @@ static int bcm2835_spi_remove(struct platform_device *pdev)
clk_disable_unprepare(bs->clk); clk_disable_unprepare(bs->clk);
bcm2835_dma_release(master);
return 0; return 0;
} }
......
...@@ -265,7 +265,7 @@ static inline int davinci_spi_get_prescale(struct davinci_spi *dspi, ...@@ -265,7 +265,7 @@ static inline int davinci_spi_get_prescale(struct davinci_spi *dspi,
ret = DIV_ROUND_UP(clk_get_rate(dspi->clk), max_speed_hz); ret = DIV_ROUND_UP(clk_get_rate(dspi->clk), max_speed_hz);
if (ret < 3 || ret > 256) if (ret < 1 || ret > 256)
return -EINVAL; return -EINVAL;
return ret - 1; return ret - 1;
......
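For context on the davinci prescaler change above: the driver derives the divider from the SPI module clock divided by the requested speed, rounded up, and programs that value minus one; the patch only widens the accepted divider range from 3..256 to 1..256. A small self-contained sketch of the arithmetic (the 48 MHz clock and 30 MHz request are made-up example values, not taken from the patch):

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))	/* same helper the kernel uses */

int main(void)
{
	unsigned long clk_rate = 48000000;	/* hypothetical SPI module clock */
	unsigned long max_speed_hz = 30000000;	/* requested transfer speed */
	unsigned long div = DIV_ROUND_UP(clk_rate, max_speed_hz);	/* = 2 */

	/* Before this patch, dividers below 3 were rejected; now 1..256 is accepted. */
	if (div < 1 || div > 256)
		printf("speed not reachable\n");
	else
		printf("prescale register value = %lu\n", div - 1);	/* prints 1 */

	return 0;
}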
...@@ -24,6 +24,7 @@ ...@@ -24,6 +24,7 @@
#include <linux/module.h> #include <linux/module.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/of_device.h> #include <linux/of_device.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>
#include <linux/regmap.h> #include <linux/regmap.h>
...@@ -47,6 +48,7 @@ ...@@ -47,6 +48,7 @@
#define SPI_MCR_CLR_RXF (1 << 10) #define SPI_MCR_CLR_RXF (1 << 10)
#define SPI_TCR 0x08 #define SPI_TCR 0x08
#define SPI_TCR_GET_TCNT(x) (((x) & 0xffff0000) >> 16)
#define SPI_CTAR(x) (0x0c + (((x) & 0x3) * 4)) #define SPI_CTAR(x) (0x0c + (((x) & 0x3) * 4))
#define SPI_CTAR_FMSZ(x) (((x) & 0x0000000f) << 27) #define SPI_CTAR_FMSZ(x) (((x) & 0x0000000f) << 27)
...@@ -67,9 +69,11 @@ ...@@ -67,9 +69,11 @@
#define SPI_SR 0x2c #define SPI_SR 0x2c
#define SPI_SR_EOQF 0x10000000 #define SPI_SR_EOQF 0x10000000
#define SPI_SR_TCFQF 0x80000000
#define SPI_RSER 0x30 #define SPI_RSER 0x30
#define SPI_RSER_EOQFE 0x10000000 #define SPI_RSER_EOQFE 0x10000000
#define SPI_RSER_TCFQE 0x80000000
#define SPI_PUSHR 0x34 #define SPI_PUSHR 0x34
#define SPI_PUSHR_CONT (1 << 31) #define SPI_PUSHR_CONT (1 << 31)
...@@ -102,12 +106,35 @@ ...@@ -102,12 +106,35 @@
#define SPI_CS_ASSERT 0x02 #define SPI_CS_ASSERT 0x02
#define SPI_CS_DROP 0x04 #define SPI_CS_DROP 0x04
#define SPI_TCR_TCNT_MAX 0x10000
struct chip_data { struct chip_data {
u32 mcr_val; u32 mcr_val;
u32 ctar_val; u32 ctar_val;
u16 void_write_data; u16 void_write_data;
}; };
enum dspi_trans_mode {
DSPI_EOQ_MODE = 0,
DSPI_TCFQ_MODE,
};
struct fsl_dspi_devtype_data {
enum dspi_trans_mode trans_mode;
};
static const struct fsl_dspi_devtype_data vf610_data = {
.trans_mode = DSPI_EOQ_MODE,
};
static const struct fsl_dspi_devtype_data ls1021a_v1_data = {
.trans_mode = DSPI_TCFQ_MODE,
};
static const struct fsl_dspi_devtype_data ls2085a_data = {
.trans_mode = DSPI_TCFQ_MODE,
};
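/*
 * Editor's sketch (assumption - the actual table is not visible in this
 * excerpt): the per-SoC devtype structures above would typically be
 * selected through an OF match table keyed on the compatible strings
 * from the DT binding change earlier in this commit:
 */
static const struct of_device_id fsl_dspi_dt_ids[] = {
	{ .compatible = "fsl,vf610-dspi", .data = (void *)&vf610_data, },
	{ .compatible = "fsl,ls1021a-v1.0-dspi", .data = (void *)&ls1021a_v1_data, },
	{ .compatible = "fsl,ls2085a-dspi", .data = (void *)&ls2085a_data, },
	{ /* sentinel */ },
};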
struct fsl_dspi { struct fsl_dspi {
struct spi_master *master; struct spi_master *master;
struct platform_device *pdev; struct platform_device *pdev;
...@@ -128,9 +155,12 @@ struct fsl_dspi { ...@@ -128,9 +155,12 @@ struct fsl_dspi {
u8 cs; u8 cs;
u16 void_write_data; u16 void_write_data;
u32 cs_change; u32 cs_change;
struct fsl_dspi_devtype_data *devtype_data;
wait_queue_head_t waitq; wait_queue_head_t waitq;
u32 waitflags; u32 waitflags;
u32 spi_tcnt;
}; };
static inline int is_double_byte_mode(struct fsl_dspi *dspi) static inline int is_double_byte_mode(struct fsl_dspi *dspi)
...@@ -213,63 +243,60 @@ static void ns_delay_scale(char *psc, char *sc, int delay_ns, ...@@ -213,63 +243,60 @@ static void ns_delay_scale(char *psc, char *sc, int delay_ns,
} }
} }
static int dspi_transfer_write(struct fsl_dspi *dspi) static u32 dspi_data_to_pushr(struct fsl_dspi *dspi, int tx_word)
{ {
int tx_count = 0;
int tx_word;
u16 d16; u16 d16;
u8 d8;
u32 dspi_pushr = 0;
int first = 1;
tx_word = is_double_byte_mode(dspi); if (!(dspi->dataflags & TRAN_STATE_TX_VOID))
d16 = tx_word ? *(u16 *)dspi->tx : *(u8 *)dspi->tx;
else
d16 = dspi->void_write_data;
/* If we are in word mode, but only have a single byte to transfer dspi->tx += tx_word + 1;
* then switch to byte mode temporarily. Will switch back at the dspi->len -= tx_word + 1;
* end of the transfer.
*/
if (tx_word && (dspi->len == 1)) {
dspi->dataflags |= TRAN_STATE_WORD_ODD_NUM;
regmap_update_bits(dspi->regmap, SPI_CTAR(dspi->cs),
SPI_FRAME_BITS_MASK, SPI_FRAME_BITS(8));
tx_word = 0;
}
while (dspi->len && (tx_count < DSPI_FIFO_SIZE)) { return SPI_PUSHR_TXDATA(d16) |
if (tx_word) { SPI_PUSHR_PCS(dspi->cs) |
if (dspi->len == 1) SPI_PUSHR_CTAS(dspi->cs) |
break; SPI_PUSHR_CONT;
}
if (!(dspi->dataflags & TRAN_STATE_TX_VOID)) { static void dspi_data_from_popr(struct fsl_dspi *dspi, int rx_word)
d16 = *(u16 *)dspi->tx; {
dspi->tx += 2; u16 d;
} else { unsigned int val;
d16 = dspi->void_write_data;
}
dspi_pushr = SPI_PUSHR_TXDATA(d16) | regmap_read(dspi->regmap, SPI_POPR, &val);
SPI_PUSHR_PCS(dspi->cs) | d = SPI_POPR_RXDATA(val);
SPI_PUSHR_CTAS(dspi->cs) |
SPI_PUSHR_CONT;
dspi->len -= 2; if (!(dspi->dataflags & TRAN_STATE_RX_VOID))
} else { rx_word ? (*(u16 *)dspi->rx = d) : (*(u8 *)dspi->rx = d);
if (!(dspi->dataflags & TRAN_STATE_TX_VOID)) {
d8 = *(u8 *)dspi->tx; dspi->rx += rx_word + 1;
dspi->tx++; }
} else {
d8 = (u8)dspi->void_write_data;
}
dspi_pushr = SPI_PUSHR_TXDATA(d8) | static int dspi_eoq_write(struct fsl_dspi *dspi)
SPI_PUSHR_PCS(dspi->cs) | {
SPI_PUSHR_CTAS(dspi->cs) | int tx_count = 0;
SPI_PUSHR_CONT; int tx_word;
u32 dspi_pushr = 0;
tx_word = is_double_byte_mode(dspi);
dspi->len--; while (dspi->len && (tx_count < DSPI_FIFO_SIZE)) {
/* If we are in word mode, only have a single byte to transfer
* switch to byte mode temporarily. Will switch back at the
* end of the transfer.
*/
if (tx_word && (dspi->len == 1)) {
dspi->dataflags |= TRAN_STATE_WORD_ODD_NUM;
regmap_update_bits(dspi->regmap, SPI_CTAR(dspi->cs),
SPI_FRAME_BITS_MASK, SPI_FRAME_BITS(8));
tx_word = 0;
} }
dspi_pushr = dspi_data_to_pushr(dspi, tx_word);
if (dspi->len == 0 || tx_count == DSPI_FIFO_SIZE - 1) { if (dspi->len == 0 || tx_count == DSPI_FIFO_SIZE - 1) {
/* last transfer in the transfer */ /* last transfer in the transfer */
dspi_pushr |= SPI_PUSHR_EOQ; dspi_pushr |= SPI_PUSHR_EOQ;
...@@ -278,11 +305,6 @@ static int dspi_transfer_write(struct fsl_dspi *dspi) ...@@ -278,11 +305,6 @@ static int dspi_transfer_write(struct fsl_dspi *dspi)
} else if (tx_word && (dspi->len == 1)) } else if (tx_word && (dspi->len == 1))
dspi_pushr |= SPI_PUSHR_EOQ; dspi_pushr |= SPI_PUSHR_EOQ;
if (first) {
first = 0;
dspi_pushr |= SPI_PUSHR_CTCNT; /* clear counter */
}
regmap_write(dspi->regmap, SPI_PUSHR, dspi_pushr); regmap_write(dspi->regmap, SPI_PUSHR, dspi_pushr);
tx_count++; tx_count++;
...@@ -291,40 +313,55 @@ static int dspi_transfer_write(struct fsl_dspi *dspi) ...@@ -291,40 +313,55 @@ static int dspi_transfer_write(struct fsl_dspi *dspi)
return tx_count * (tx_word + 1); return tx_count * (tx_word + 1);
} }
-static int dspi_transfer_read(struct fsl_dspi *dspi)
-{
-	int rx_count = 0;
-	int rx_word = is_double_byte_mode(dspi);
-	u16 d;
-
-	while ((dspi->rx < dspi->rx_end)
-			&& (rx_count < DSPI_FIFO_SIZE)) {
-		if (rx_word) {
-			unsigned int val;
-
-			if ((dspi->rx_end - dspi->rx) == 1)
-				break;
-
-			regmap_read(dspi->regmap, SPI_POPR, &val);
-			d = SPI_POPR_RXDATA(val);
-
-			if (!(dspi->dataflags & TRAN_STATE_RX_VOID))
-				*(u16 *)dspi->rx = d;
-			dspi->rx += 2;
-
-		} else {
-			unsigned int val;
-
-			regmap_read(dspi->regmap, SPI_POPR, &val);
-			d = SPI_POPR_RXDATA(val);
-			if (!(dspi->dataflags & TRAN_STATE_RX_VOID))
-				*(u8 *)dspi->rx = d;
-			dspi->rx++;
-		}
-		rx_count++;
-	}
-
-	return rx_count;
-}
+static int dspi_eoq_read(struct fsl_dspi *dspi)
+{
+	int rx_count = 0;
+	int rx_word = is_double_byte_mode(dspi);
+
+	while ((dspi->rx < dspi->rx_end)
+			&& (rx_count < DSPI_FIFO_SIZE)) {
+		if (rx_word && (dspi->rx_end - dspi->rx) == 1)
+			rx_word = 0;
+
+		dspi_data_from_popr(dspi, rx_word);
+		rx_count++;
+	}
+
+	return rx_count;
+}
+
+static int dspi_tcfq_write(struct fsl_dspi *dspi)
+{
+	int tx_word;
+	u32 dspi_pushr = 0;
+
+	tx_word = is_double_byte_mode(dspi);
+
+	if (tx_word && (dspi->len == 1)) {
+		dspi->dataflags |= TRAN_STATE_WORD_ODD_NUM;
+		regmap_update_bits(dspi->regmap, SPI_CTAR(dspi->cs),
+				SPI_FRAME_BITS_MASK, SPI_FRAME_BITS(8));
+		tx_word = 0;
+	}
+
+	dspi_pushr = dspi_data_to_pushr(dspi, tx_word);
+
+	if ((dspi->cs_change) && (!dspi->len))
+		dspi_pushr &= ~SPI_PUSHR_CONT;
+
+	regmap_write(dspi->regmap, SPI_PUSHR, dspi_pushr);
+
+	return tx_word + 1;
+}
+
+static void dspi_tcfq_read(struct fsl_dspi *dspi)
+{
+	int rx_word = is_double_byte_mode(dspi);
+
+	if (rx_word && (dspi->rx_end - dspi->rx) == 1)
+		rx_word = 0;
+
+	dspi_data_from_popr(dspi, rx_word);
+}
static int dspi_transfer_one_message(struct spi_master *master, static int dspi_transfer_one_message(struct spi_master *master,
...@@ -334,6 +371,12 @@ static int dspi_transfer_one_message(struct spi_master *master, ...@@ -334,6 +371,12 @@ static int dspi_transfer_one_message(struct spi_master *master,
struct spi_device *spi = message->spi; struct spi_device *spi = message->spi;
struct spi_transfer *transfer; struct spi_transfer *transfer;
int status = 0; int status = 0;
enum dspi_trans_mode trans_mode;
u32 spi_tcr;
regmap_read(dspi->regmap, SPI_TCR, &spi_tcr);
dspi->spi_tcnt = SPI_TCR_GET_TCNT(spi_tcr);
message->actual_length = 0; message->actual_length = 0;
list_for_each_entry(transfer, &message->transfers, transfer_list) { list_for_each_entry(transfer, &message->transfers, transfer_list) {
...@@ -341,10 +384,10 @@ static int dspi_transfer_one_message(struct spi_master *master, ...@@ -341,10 +384,10 @@ static int dspi_transfer_one_message(struct spi_master *master,
 		dspi->cur_msg = message;
 		dspi->cur_chip = spi_get_ctldata(spi);
 		dspi->cs = spi->chip_select;
+		dspi->cs_change = 0;
 		if (dspi->cur_transfer->transfer_list.next
 				== &dspi->cur_msg->transfers)
-			transfer->cs_change = 1;
-		dspi->cs_change = transfer->cs_change;
+			dspi->cs_change = 1;
 		dspi->void_write_data = dspi->cur_chip->void_write_data;
 		dspi->dataflags = 0;
...@@ -370,8 +413,22 @@ static int dspi_transfer_one_message(struct spi_master *master, ...@@ -370,8 +413,22 @@ static int dspi_transfer_one_message(struct spi_master *master,
 		regmap_write(dspi->regmap, SPI_CTAR(dspi->cs),
 				dspi->cur_chip->ctar_val);
-		regmap_write(dspi->regmap, SPI_RSER, SPI_RSER_EOQFE);
-		message->actual_length += dspi_transfer_write(dspi);
+		trans_mode = dspi->devtype_data->trans_mode;
+		switch (trans_mode) {
+		case DSPI_EOQ_MODE:
+			regmap_write(dspi->regmap, SPI_RSER, SPI_RSER_EOQFE);
+			dspi_eoq_write(dspi);
+			break;
+		case DSPI_TCFQ_MODE:
+			regmap_write(dspi->regmap, SPI_RSER, SPI_RSER_TCFQE);
+			dspi_tcfq_write(dspi);
+			break;
+		default:
+			dev_err(&dspi->pdev->dev, "unsupported trans_mode %u\n",
+				trans_mode);
+			status = -EINVAL;
+			goto out;
+		}
if (wait_event_interruptible(dspi->waitq, dspi->waitflags)) if (wait_event_interruptible(dspi->waitq, dspi->waitflags))
dev_err(&dspi->pdev->dev, "wait transfer complete fail!\n"); dev_err(&dspi->pdev->dev, "wait transfer complete fail!\n");
...@@ -381,6 +438,7 @@ static int dspi_transfer_one_message(struct spi_master *master, ...@@ -381,6 +438,7 @@ static int dspi_transfer_one_message(struct spi_master *master,
udelay(transfer->delay_usecs); udelay(transfer->delay_usecs);
} }
out:
message->status = status; message->status = status;
spi_finalize_current_message(master); spi_finalize_current_message(master);
...@@ -460,27 +518,89 @@ static void dspi_cleanup(struct spi_device *spi) ...@@ -460,27 +518,89 @@ static void dspi_cleanup(struct spi_device *spi)
 static irqreturn_t dspi_interrupt(int irq, void *dev_id)
 {
 	struct fsl_dspi *dspi = (struct fsl_dspi *)dev_id;
 	struct spi_message *msg = dspi->cur_msg;
+	enum dspi_trans_mode trans_mode;
+	u32 spi_sr, spi_tcr;
+	u32 spi_tcnt, tcnt_diff;
+	int tx_word;

-	regmap_write(dspi->regmap, SPI_SR, SPI_SR_EOQF);
-	dspi_transfer_read(dspi);
-
-	if (!dspi->len) {
-		if (dspi->dataflags & TRAN_STATE_WORD_ODD_NUM)
-			regmap_update_bits(dspi->regmap, SPI_CTAR(dspi->cs),
-				SPI_FRAME_BITS_MASK, SPI_FRAME_BITS(16));
-
-		dspi->waitflags = 1;
-		wake_up_interruptible(&dspi->waitq);
-	} else
-		msg->actual_length += dspi_transfer_write(dspi);
+	regmap_read(dspi->regmap, SPI_SR, &spi_sr);
+	regmap_write(dspi->regmap, SPI_SR, spi_sr);
+
+	if (spi_sr & (SPI_SR_EOQF | SPI_SR_TCFQF)) {
+		tx_word = is_double_byte_mode(dspi);
+
+		regmap_read(dspi->regmap, SPI_TCR, &spi_tcr);
+		spi_tcnt = SPI_TCR_GET_TCNT(spi_tcr);
+		/*
+		 * The SPI Transfer Counter in SPI_TCR is 16 bits wide,
+		 * so the maximum count is 65535. When the counter reaches
+		 * 65535 it wraps around and resets to zero.
+		 * spi_tcnt may therefore be less than dspi->spi_tcnt,
+		 * which means the counter already wrapped around.
+		 * The SPI Transfer Counter counts transmitted frames, and
+		 * a frame may be two bytes.
+		 */
+		tcnt_diff = ((spi_tcnt + SPI_TCR_TCNT_MAX) - dspi->spi_tcnt)
+			% SPI_TCR_TCNT_MAX;
+		tcnt_diff *= (tx_word + 1);
+		if (dspi->dataflags & TRAN_STATE_WORD_ODD_NUM)
+			tcnt_diff--;
+
+		msg->actual_length += tcnt_diff;
+
+		dspi->spi_tcnt = spi_tcnt;
+
+		trans_mode = dspi->devtype_data->trans_mode;
+		switch (trans_mode) {
+		case DSPI_EOQ_MODE:
+			dspi_eoq_read(dspi);
+			break;
+		case DSPI_TCFQ_MODE:
+			dspi_tcfq_read(dspi);
+			break;
+		default:
+			dev_err(&dspi->pdev->dev, "unsupported trans_mode %u\n",
+				trans_mode);
+			return IRQ_HANDLED;
+		}
+
+		if (!dspi->len) {
+			if (dspi->dataflags & TRAN_STATE_WORD_ODD_NUM) {
+				regmap_update_bits(dspi->regmap,
+						   SPI_CTAR(dspi->cs),
+						   SPI_FRAME_BITS_MASK,
+						   SPI_FRAME_BITS(16));
+				dspi->dataflags &= ~TRAN_STATE_WORD_ODD_NUM;
+			}
+
+			dspi->waitflags = 1;
+			wake_up_interruptible(&dspi->waitq);
+		} else {
+			switch (trans_mode) {
+			case DSPI_EOQ_MODE:
+				dspi_eoq_write(dspi);
+				break;
+			case DSPI_TCFQ_MODE:
+				dspi_tcfq_write(dspi);
+				break;
+			default:
+				dev_err(&dspi->pdev->dev,
+					"unsupported trans_mode %u\n",
+					trans_mode);
+			}
+		}
+	}

 	return IRQ_HANDLED;
 }
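To make the counter arithmetic above concrete (this is commentary, not part of the diff): the transfer counter is 16 bits, so the handler computes the number of frames completed since the previous interrupt modulo the counter width, which stays correct even when the hardware counter wraps. A standalone sketch, assuming SPI_TCR_TCNT_MAX is the 16-bit modulus (0x10000):

/* Illustrative only; same modular delta as the interrupt handler uses. */
#include <stdio.h>

#define SPI_TCR_TCNT_MAX 0x10000u	/* assumed value of the driver macro */

static unsigned int tcnt_delta(unsigned int prev, unsigned int now)
{
	return ((now + SPI_TCR_TCNT_MAX) - prev) % SPI_TCR_TCNT_MAX;
}

int main(void)
{
	/* counter wrapped: previously 65530, now 8 -> 14 frames completed */
	printf("%u frames\n", tcnt_delta(65530, 8));
	return 0;
}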
 static const struct of_device_id fsl_dspi_dt_ids[] = {
-	{ .compatible = "fsl,vf610-dspi", .data = NULL, },
+	{ .compatible = "fsl,vf610-dspi", .data = (void *)&vf610_data, },
+	{ .compatible = "fsl,ls1021a-v1.0-dspi",
+		.data = (void *)&ls1021a_v1_data, },
+	{ .compatible = "fsl,ls2085a-dspi", .data = (void *)&ls2085a_data, },
 	{ /* sentinel */ }
}; };
MODULE_DEVICE_TABLE(of, fsl_dspi_dt_ids); MODULE_DEVICE_TABLE(of, fsl_dspi_dt_ids);
...@@ -494,6 +614,8 @@ static int dspi_suspend(struct device *dev) ...@@ -494,6 +614,8 @@ static int dspi_suspend(struct device *dev)
spi_master_suspend(master); spi_master_suspend(master);
clk_disable_unprepare(dspi->clk); clk_disable_unprepare(dspi->clk);
pinctrl_pm_select_sleep_state(dev);
return 0; return 0;
} }
...@@ -502,6 +624,8 @@ static int dspi_resume(struct device *dev) ...@@ -502,6 +624,8 @@ static int dspi_resume(struct device *dev)
struct spi_master *master = dev_get_drvdata(dev); struct spi_master *master = dev_get_drvdata(dev);
struct fsl_dspi *dspi = spi_master_get_devdata(master); struct fsl_dspi *dspi = spi_master_get_devdata(master);
pinctrl_pm_select_default_state(dev);
clk_prepare_enable(dspi->clk); clk_prepare_enable(dspi->clk);
spi_master_resume(master); spi_master_resume(master);
...@@ -526,6 +650,8 @@ static int dspi_probe(struct platform_device *pdev) ...@@ -526,6 +650,8 @@ static int dspi_probe(struct platform_device *pdev)
struct resource *res; struct resource *res;
void __iomem *base; void __iomem *base;
int ret = 0, cs_num, bus_num; int ret = 0, cs_num, bus_num;
const struct of_device_id *of_id =
of_match_device(fsl_dspi_dt_ids, &pdev->dev);
master = spi_alloc_master(&pdev->dev, sizeof(struct fsl_dspi)); master = spi_alloc_master(&pdev->dev, sizeof(struct fsl_dspi));
if (!master) if (!master)
...@@ -559,6 +685,13 @@ static int dspi_probe(struct platform_device *pdev) ...@@ -559,6 +685,13 @@ static int dspi_probe(struct platform_device *pdev)
} }
master->bus_num = bus_num; master->bus_num = bus_num;
dspi->devtype_data = (struct fsl_dspi_devtype_data *)of_id->data;
if (!dspi->devtype_data) {
dev_err(&pdev->dev, "can't get devtype_data\n");
ret = -EFAULT;
goto out_master_put;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
base = devm_ioremap_resource(&pdev->dev, res); base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(base)) { if (IS_ERR(base)) {
...@@ -566,7 +699,7 @@ static int dspi_probe(struct platform_device *pdev) ...@@ -566,7 +699,7 @@ static int dspi_probe(struct platform_device *pdev)
goto out_master_put; goto out_master_put;
} }
-	dspi->regmap = devm_regmap_init_mmio_clk(&pdev->dev, "dspi", base,
+	dspi->regmap = devm_regmap_init_mmio_clk(&pdev->dev, NULL, base,
 			&dspi_regmap_config);
if (IS_ERR(dspi->regmap)) { if (IS_ERR(dspi->regmap)) {
dev_err(&pdev->dev, "failed to init regmap: %ld\n", dev_err(&pdev->dev, "failed to init regmap: %ld\n",
......
...@@ -561,9 +561,13 @@ void fsl_espi_cpu_irq(struct mpc8xxx_spi *mspi, u32 events) ...@@ -561,9 +561,13 @@ void fsl_espi_cpu_irq(struct mpc8xxx_spi *mspi, u32 events)
 		/* spin until TX is done */
 		ret = spin_event_timeout(((events = mpc8xxx_spi_read_reg(
-				&reg_base->event)) & SPIE_NF) == 0, 1000, 0);
+				&reg_base->event)) & SPIE_NF), 1000, 0);
if (!ret) { if (!ret) {
dev_err(mspi->dev, "tired waiting for SPIE_NF\n"); dev_err(mspi->dev, "tired waiting for SPIE_NF\n");
/* Clear the SPIE bits */
mpc8xxx_spi_write_reg(&reg_base->event, events);
complete(&mspi->done);
return; return;
} }
} }
......
...@@ -674,7 +674,7 @@ static struct spi_imx_devtype_data imx51_ecspi_devtype_data = { ...@@ -674,7 +674,7 @@ static struct spi_imx_devtype_data imx51_ecspi_devtype_data = {
.devtype = IMX51_ECSPI, .devtype = IMX51_ECSPI,
}; };
-static struct platform_device_id spi_imx_devtype[] = {
+static const struct platform_device_id spi_imx_devtype[] = {
{ {
.name = "imx1-cspi", .name = "imx1-cspi",
.driver_data = (kernel_ulong_t) &imx1_cspi_devtype_data, .driver_data = (kernel_ulong_t) &imx1_cspi_devtype_data,
......
...@@ -35,6 +35,7 @@ ...@@ -35,6 +35,7 @@
#include <linux/gcd.h> #include <linux/gcd.h>
#include <linux/spi/spi.h> #include <linux/spi/spi.h>
#include <linux/gpio.h>
#include <linux/platform_data/spi-omap2-mcspi.h> #include <linux/platform_data/spi-omap2-mcspi.h>
...@@ -242,17 +243,27 @@ static void omap2_mcspi_set_enable(const struct spi_device *spi, int enable) ...@@ -242,17 +243,27 @@ static void omap2_mcspi_set_enable(const struct spi_device *spi, int enable)
mcspi_read_cs_reg(spi, OMAP2_MCSPI_CHCTRL0); mcspi_read_cs_reg(spi, OMAP2_MCSPI_CHCTRL0);
} }
-static void omap2_mcspi_force_cs(struct spi_device *spi, int cs_active)
+static void omap2_mcspi_set_cs(struct spi_device *spi, bool enable)
 {
 	u32 l;

-	l = mcspi_cached_chconf0(spi);
-	if (cs_active)
-		l |= OMAP2_MCSPI_CHCONF_FORCE;
-	else
-		l &= ~OMAP2_MCSPI_CHCONF_FORCE;
-
-	mcspi_write_chconf0(spi, l);
+	/* The controller handles the inverted chip selects
+	 * using the OMAP2_MCSPI_CHCONF_EPOL bit so revert
+	 * the inversion from the core spi_set_cs function.
+	 */
+	if (spi->mode & SPI_CS_HIGH)
+		enable = !enable;
+
+	if (spi->controller_state) {
+		l = mcspi_cached_chconf0(spi);
+
+		if (enable)
+			l &= ~OMAP2_MCSPI_CHCONF_FORCE;
+		else
+			l |= OMAP2_MCSPI_CHCONF_FORCE;
+
+		mcspi_write_chconf0(spi, l);
+	}
 }
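For context (not part of the diff): the SPI core already folds SPI_CS_HIGH into the value it passes to set_cs(), while McSPI applies its own polarity via the EPOL bit, so the callback above re-inverts 'enable' to avoid inverting twice, exactly as its comment says. A standalone sketch of that polarity handling:

/* Illustrative only; shows the level the controller path ends up driving. */
#include <stdbool.h>
#include <stdio.h>

static bool mcspi_level(bool core_enable, bool cs_high_mode)
{
	bool enable = core_enable;

	if (cs_high_mode)	/* revert the core's SPI_CS_HIGH inversion */
		enable = !enable;
	return enable;
}

int main(void)
{
	printf("active-low assert:  %d\n", mcspi_level(true, false));
	printf("active-high assert: %d\n", mcspi_level(true, true));
	return 0;
}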
static void omap2_mcspi_set_master_mode(struct spi_master *master) static void omap2_mcspi_set_master_mode(struct spi_master *master)
...@@ -1011,6 +1022,15 @@ static int omap2_mcspi_setup(struct spi_device *spi) ...@@ -1011,6 +1022,15 @@ static int omap2_mcspi_setup(struct spi_device *spi)
return ret; return ret;
} }
if (gpio_is_valid(spi->cs_gpio)) {
ret = gpio_request(spi->cs_gpio, dev_name(&spi->dev));
if (ret) {
dev_err(&spi->dev, "failed to request gpio\n");
return ret;
}
gpio_direction_output(spi->cs_gpio, !(spi->mode & SPI_CS_HIGH));
}
ret = pm_runtime_get_sync(mcspi->dev); ret = pm_runtime_get_sync(mcspi->dev);
if (ret < 0) if (ret < 0)
return ret; return ret;
...@@ -1050,9 +1070,13 @@ static void omap2_mcspi_cleanup(struct spi_device *spi) ...@@ -1050,9 +1070,13 @@ static void omap2_mcspi_cleanup(struct spi_device *spi)
mcspi_dma->dma_tx = NULL; mcspi_dma->dma_tx = NULL;
} }
} }
if (gpio_is_valid(spi->cs_gpio))
gpio_free(spi->cs_gpio);
} }
-static void omap2_mcspi_work(struct omap2_mcspi *mcspi, struct spi_message *m)
+static int omap2_mcspi_work_one(struct omap2_mcspi *mcspi,
+		struct spi_device *spi, struct spi_transfer *t)
 {

 	/* We only enable one channel at a time -- the one whose message is
@@ -1062,18 +1086,14 @@ static void omap2_mcspi_work(struct omap2_mcspi *mcspi, struct spi_message *m)
 	 * chipselect with the FORCE bit ... CS != channel enable.
 	 */

-	struct spi_device *spi;
-	struct spi_transfer *t = NULL;
 	struct spi_master *master;
 	struct omap2_mcspi_dma *mcspi_dma;
-	int cs_active = 0;
 	struct omap2_mcspi_cs *cs;
 	struct omap2_mcspi_device_config *cd;
 	int par_override = 0;
 	int status = 0;
 	u32 chconf;

-	spi = m->spi;
 	master = spi->master;
 	mcspi_dma = mcspi->dma_channels + spi->chip_select;
 	cs = spi->controller_state;
@@ -1090,103 +1110,84 @@ static void omap2_mcspi_work(struct omap2_mcspi *mcspi, struct spi_message *m)
par_override = 1; par_override = 1;
omap2_mcspi_set_enable(spi, 0); omap2_mcspi_set_enable(spi, 0);
list_for_each_entry(t, &m->transfers, transfer_list) {
if (t->tx_buf == NULL && t->rx_buf == NULL && t->len) {
status = -EINVAL;
break;
}
if (par_override ||
(t->speed_hz != spi->max_speed_hz) ||
(t->bits_per_word != spi->bits_per_word)) {
par_override = 1;
status = omap2_mcspi_setup_transfer(spi, t);
if (status < 0)
break;
if (t->speed_hz == spi->max_speed_hz &&
t->bits_per_word == spi->bits_per_word)
par_override = 0;
}
if (cd && cd->cs_per_word) {
chconf = mcspi->ctx.modulctrl;
chconf &= ~OMAP2_MCSPI_MODULCTRL_SINGLE;
mcspi_write_reg(master, OMAP2_MCSPI_MODULCTRL, chconf);
mcspi->ctx.modulctrl =
mcspi_read_cs_reg(spi, OMAP2_MCSPI_MODULCTRL);
}
if (gpio_is_valid(spi->cs_gpio))
omap2_mcspi_set_cs(spi, spi->mode & SPI_CS_HIGH);
if (!cs_active) { if (par_override ||
omap2_mcspi_force_cs(spi, 1); (t->speed_hz != spi->max_speed_hz) ||
cs_active = 1; (t->bits_per_word != spi->bits_per_word)) {
} par_override = 1;
status = omap2_mcspi_setup_transfer(spi, t);
chconf = mcspi_cached_chconf0(spi); if (status < 0)
chconf &= ~OMAP2_MCSPI_CHCONF_TRM_MASK; goto out;
chconf &= ~OMAP2_MCSPI_CHCONF_TURBO; if (t->speed_hz == spi->max_speed_hz &&
t->bits_per_word == spi->bits_per_word)
par_override = 0;
}
if (cd && cd->cs_per_word) {
chconf = mcspi->ctx.modulctrl;
chconf &= ~OMAP2_MCSPI_MODULCTRL_SINGLE;
mcspi_write_reg(master, OMAP2_MCSPI_MODULCTRL, chconf);
mcspi->ctx.modulctrl =
mcspi_read_cs_reg(spi, OMAP2_MCSPI_MODULCTRL);
}
if (t->tx_buf == NULL) chconf = mcspi_cached_chconf0(spi);
chconf |= OMAP2_MCSPI_CHCONF_TRM_RX_ONLY; chconf &= ~OMAP2_MCSPI_CHCONF_TRM_MASK;
else if (t->rx_buf == NULL) chconf &= ~OMAP2_MCSPI_CHCONF_TURBO;
chconf |= OMAP2_MCSPI_CHCONF_TRM_TX_ONLY;
if (t->tx_buf == NULL)
if (cd && cd->turbo_mode && t->tx_buf == NULL) { chconf |= OMAP2_MCSPI_CHCONF_TRM_RX_ONLY;
/* Turbo mode is for more than one word */ else if (t->rx_buf == NULL)
if (t->len > ((cs->word_len + 7) >> 3)) chconf |= OMAP2_MCSPI_CHCONF_TRM_TX_ONLY;
chconf |= OMAP2_MCSPI_CHCONF_TURBO;
} if (cd && cd->turbo_mode && t->tx_buf == NULL) {
/* Turbo mode is for more than one word */
if (t->len > ((cs->word_len + 7) >> 3))
chconf |= OMAP2_MCSPI_CHCONF_TURBO;
}
mcspi_write_chconf0(spi, chconf); mcspi_write_chconf0(spi, chconf);
if (t->len) { if (t->len) {
unsigned count; unsigned count;
if ((mcspi_dma->dma_rx && mcspi_dma->dma_tx) && if ((mcspi_dma->dma_rx && mcspi_dma->dma_tx) &&
(m->is_dma_mapped || t->len >= DMA_MIN_BYTES)) (t->len >= DMA_MIN_BYTES))
omap2_mcspi_set_fifo(spi, t, 1); omap2_mcspi_set_fifo(spi, t, 1);
omap2_mcspi_set_enable(spi, 1); omap2_mcspi_set_enable(spi, 1);
/* RX_ONLY mode needs dummy data in TX reg */ /* RX_ONLY mode needs dummy data in TX reg */
if (t->tx_buf == NULL) if (t->tx_buf == NULL)
writel_relaxed(0, cs->base writel_relaxed(0, cs->base
+ OMAP2_MCSPI_TX0); + OMAP2_MCSPI_TX0);
if ((mcspi_dma->dma_rx && mcspi_dma->dma_tx) && if ((mcspi_dma->dma_rx && mcspi_dma->dma_tx) &&
(m->is_dma_mapped || t->len >= DMA_MIN_BYTES)) (t->len >= DMA_MIN_BYTES))
count = omap2_mcspi_txrx_dma(spi, t); count = omap2_mcspi_txrx_dma(spi, t);
else else
count = omap2_mcspi_txrx_pio(spi, t); count = omap2_mcspi_txrx_pio(spi, t);
m->actual_length += count;
if (count != t->len) { if (count != t->len) {
status = -EIO; status = -EIO;
break; goto out;
}
} }
}
if (t->delay_usecs) omap2_mcspi_set_enable(spi, 0);
udelay(t->delay_usecs);
/* ignore the "leave it on after last xfer" hint */
if (t->cs_change) {
omap2_mcspi_force_cs(spi, 0);
cs_active = 0;
}
omap2_mcspi_set_enable(spi, 0); if (mcspi->fifo_depth > 0)
omap2_mcspi_set_fifo(spi, t, 0);
if (mcspi->fifo_depth > 0) out:
omap2_mcspi_set_fifo(spi, t, 0);
}
 	/* Restore defaults if they were overridden */
if (par_override) { if (par_override) {
par_override = 0; par_override = 0;
status = omap2_mcspi_setup_transfer(spi, NULL); status = omap2_mcspi_setup_transfer(spi, NULL);
} }
if (cs_active)
omap2_mcspi_force_cs(spi, 0);
if (cd && cd->cs_per_word) { if (cd && cd->cs_per_word) {
chconf = mcspi->ctx.modulctrl; chconf = mcspi->ctx.modulctrl;
chconf |= OMAP2_MCSPI_MODULCTRL_SINGLE; chconf |= OMAP2_MCSPI_MODULCTRL_SINGLE;
...@@ -1197,78 +1198,64 @@ static void omap2_mcspi_work(struct omap2_mcspi *mcspi, struct spi_message *m) ...@@ -1197,78 +1198,64 @@ static void omap2_mcspi_work(struct omap2_mcspi *mcspi, struct spi_message *m)
omap2_mcspi_set_enable(spi, 0); omap2_mcspi_set_enable(spi, 0);
if (gpio_is_valid(spi->cs_gpio))
omap2_mcspi_set_cs(spi, !(spi->mode & SPI_CS_HIGH));
if (mcspi->fifo_depth > 0 && t) if (mcspi->fifo_depth > 0 && t)
omap2_mcspi_set_fifo(spi, t, 0); omap2_mcspi_set_fifo(spi, t, 0);
m->status = status; return status;
} }
-static int omap2_mcspi_transfer_one_message(struct spi_master *master,
-		struct spi_message *m)
+static int omap2_mcspi_transfer_one(struct spi_master *master,
+		struct spi_device *spi, struct spi_transfer *t)
 {
-	struct spi_device *spi;
 	struct omap2_mcspi *mcspi;
 	struct omap2_mcspi_dma *mcspi_dma;
-	struct spi_transfer *t;
-	int status;
+	const void *tx_buf = t->tx_buf;
+	void *rx_buf = t->rx_buf;
+	unsigned len = t->len;

-	spi = m->spi;
 	mcspi = spi_master_get_devdata(master);
 	mcspi_dma = mcspi->dma_channels + spi->chip_select;
-	m->actual_length = 0;
-	m->status = 0;
-
-	list_for_each_entry(t, &m->transfers, transfer_list) {
-		const void	*tx_buf = t->tx_buf;
-		void		*rx_buf = t->rx_buf;
-		unsigned	len = t->len;
-
-		if ((len && !(rx_buf || tx_buf))) {
-			dev_dbg(mcspi->dev, "transfer: %d Hz, %d %s%s, %d bpw\n",
-					t->speed_hz,
-					len,
-					tx_buf ? "tx" : "",
-					rx_buf ? "rx" : "",
-					t->bits_per_word);
-			status = -EINVAL;
-			goto out;
-		}
-
-		if (m->is_dma_mapped || len < DMA_MIN_BYTES)
-			continue;
-
-		if (mcspi_dma->dma_tx && tx_buf != NULL) {
-			t->tx_dma = dma_map_single(mcspi->dev, (void *) tx_buf,
-					len, DMA_TO_DEVICE);
-			if (dma_mapping_error(mcspi->dev, t->tx_dma)) {
-				dev_dbg(mcspi->dev, "dma %cX %d bytes error\n",
-						'T', len);
-				status = -EINVAL;
-				goto out;
-			}
-		}
-		if (mcspi_dma->dma_rx && rx_buf != NULL) {
-			t->rx_dma = dma_map_single(mcspi->dev, rx_buf, t->len,
-					DMA_FROM_DEVICE);
-			if (dma_mapping_error(mcspi->dev, t->rx_dma)) {
-				dev_dbg(mcspi->dev, "dma %cX %d bytes error\n",
-						'R', len);
-				if (tx_buf != NULL)
-					dma_unmap_single(mcspi->dev, t->tx_dma,
-							len, DMA_TO_DEVICE);
-				status = -EINVAL;
-				goto out;
-			}
-		}
+	if ((len && !(rx_buf || tx_buf))) {
+		dev_dbg(mcspi->dev, "transfer: %d Hz, %d %s%s, %d bpw\n",
+				t->speed_hz,
+				len,
+				tx_buf ? "tx" : "",
+				rx_buf ? "rx" : "",
+				t->bits_per_word);
+		return -EINVAL;
+	}
+
+	if (len < DMA_MIN_BYTES)
+		goto skip_dma_map;
+
+	if (mcspi_dma->dma_tx && tx_buf != NULL) {
+		t->tx_dma = dma_map_single(mcspi->dev, (void *) tx_buf,
+				len, DMA_TO_DEVICE);
+		if (dma_mapping_error(mcspi->dev, t->tx_dma)) {
+			dev_dbg(mcspi->dev, "dma %cX %d bytes error\n",
+					'T', len);
+			return -EINVAL;
+		}
+	}
+	if (mcspi_dma->dma_rx && rx_buf != NULL) {
+		t->rx_dma = dma_map_single(mcspi->dev, rx_buf, t->len,
+				DMA_FROM_DEVICE);
+		if (dma_mapping_error(mcspi->dev, t->rx_dma)) {
+			dev_dbg(mcspi->dev, "dma %cX %d bytes error\n",
+					'R', len);
+			if (tx_buf != NULL)
+				dma_unmap_single(mcspi->dev, t->tx_dma,
+						len, DMA_TO_DEVICE);
+			return -EINVAL;
+		}
+	}

-	omap2_mcspi_work(mcspi, m);
-	/* spi_finalize_current_message() changes the status inside the
-	 * spi_message, save the status here. */
-	status = m->status;
-out:
-	spi_finalize_current_message(master);
-	return status;
+skip_dma_map:
+	return omap2_mcspi_work_one(mcspi, spi, t);
 }
static int omap2_mcspi_master_setup(struct omap2_mcspi *mcspi) static int omap2_mcspi_master_setup(struct omap2_mcspi *mcspi)
...@@ -1347,7 +1334,8 @@ static int omap2_mcspi_probe(struct platform_device *pdev) ...@@ -1347,7 +1334,8 @@ static int omap2_mcspi_probe(struct platform_device *pdev)
master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32);
master->setup = omap2_mcspi_setup; master->setup = omap2_mcspi_setup;
master->auto_runtime_pm = true; master->auto_runtime_pm = true;
-	master->transfer_one_message = omap2_mcspi_transfer_one_message;
+	master->transfer_one = omap2_mcspi_transfer_one;
+	master->set_cs = omap2_mcspi_set_cs;
master->cleanup = omap2_mcspi_cleanup; master->cleanup = omap2_mcspi_cleanup;
master->dev.of_node = node; master->dev.of_node = node;
master->max_speed_hz = OMAP2_MCSPI_MAX_FREQ; master->max_speed_hz = OMAP2_MCSPI_MAX_FREQ;
......
...@@ -61,6 +61,12 @@ enum orion_spi_type { ...@@ -61,6 +61,12 @@ enum orion_spi_type {
struct orion_spi_dev { struct orion_spi_dev {
enum orion_spi_type typ; enum orion_spi_type typ;
	/*
	 * min_divisor and max_hz should be exclusive; the only case where
	 * we can have both is when handling the armada-370-spi compatible
	 * with an old device tree.
	 */
	unsigned long max_hz;
unsigned int min_divisor; unsigned int min_divisor;
unsigned int max_divisor; unsigned int max_divisor;
u32 prescale_mask; u32 prescale_mask;
...@@ -385,16 +391,54 @@ static const struct orion_spi_dev orion_spi_dev_data = { ...@@ -385,16 +391,54 @@ static const struct orion_spi_dev orion_spi_dev_data = {
.prescale_mask = ORION_SPI_CLK_PRESCALE_MASK, .prescale_mask = ORION_SPI_CLK_PRESCALE_MASK,
}; };
-static const struct orion_spi_dev armada_spi_dev_data = {
+static const struct orion_spi_dev armada_370_spi_dev_data = {
 	.typ = ARMADA_SPI,
-	.min_divisor = 1,
+	.min_divisor = 4,
+	.max_divisor = 1920,
+	.max_hz = 50000000,
+	.prescale_mask = ARMADA_SPI_CLK_PRESCALE_MASK,
+};
+
+static const struct orion_spi_dev armada_xp_spi_dev_data = {
+	.typ = ARMADA_SPI,
+	.max_hz = 50000000,
+	.max_divisor = 1920,
+	.prescale_mask = ARMADA_SPI_CLK_PRESCALE_MASK,
+};
+
+static const struct orion_spi_dev armada_375_spi_dev_data = {
+	.typ = ARMADA_SPI,
+	.min_divisor = 15,
 	.max_divisor = 1920,
 	.prescale_mask = ARMADA_SPI_CLK_PRESCALE_MASK,
 };
static const struct of_device_id orion_spi_of_match_table[] = { static const struct of_device_id orion_spi_of_match_table[] = {
{ .compatible = "marvell,orion-spi", .data = &orion_spi_dev_data, }, {
{ .compatible = "marvell,armada-370-spi", .data = &armada_spi_dev_data, }, .compatible = "marvell,orion-spi",
.data = &orion_spi_dev_data,
},
{
.compatible = "marvell,armada-370-spi",
.data = &armada_370_spi_dev_data,
},
{
.compatible = "marvell,armada-375-spi",
.data = &armada_375_spi_dev_data,
},
{
.compatible = "marvell,armada-380-spi",
.data = &armada_xp_spi_dev_data,
},
{
.compatible = "marvell,armada-390-spi",
.data = &armada_xp_spi_dev_data,
},
{
.compatible = "marvell,armada-xp-spi",
.data = &armada_xp_spi_dev_data,
},
{} {}
}; };
MODULE_DEVICE_TABLE(of, orion_spi_of_match_table); MODULE_DEVICE_TABLE(of, orion_spi_of_match_table);
...@@ -454,7 +498,23 @@ static int orion_spi_probe(struct platform_device *pdev) ...@@ -454,7 +498,23 @@ static int orion_spi_probe(struct platform_device *pdev)
goto out; goto out;
 	tclk_hz = clk_get_rate(spi->clk);
-	master->max_speed_hz = DIV_ROUND_UP(tclk_hz, devdata->min_divisor);
+
+	/*
+	 * With old device tree, armada-370-spi could be used with
+	 * Armada XP, however for this SoC the maximum frequency is
+	 * 50MHz instead of tclk/4. On Armada 370, tclk cannot be
+	 * higher than 200MHz. So, in order to be able to handle both
+	 * SoCs, we can take the minimum of 50MHz and tclk/4.
+	 */
+	if (of_device_is_compatible(pdev->dev.of_node,
+				    "marvell,armada-370-spi"))
+		master->max_speed_hz = min(devdata->max_hz,
+				DIV_ROUND_UP(tclk_hz, devdata->min_divisor));
+	else if (devdata->min_divisor)
+		master->max_speed_hz =
+			DIV_ROUND_UP(tclk_hz, devdata->min_divisor);
+	else
+		master->max_speed_hz = devdata->max_hz;
 	master->min_speed_hz = DIV_ROUND_UP(tclk_hz, devdata->max_divisor);
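To illustrate the clock limit chosen above (commentary, not part of the diff): for the "marvell,armada-370-spi" compatible the probe takes min(50 MHz, tclk/4), which is correct for Armada 370 and also for an Armada XP described by an old device tree. A standalone sketch with a few example tclk rates:

/* Illustrative only; example values, not taken from any real board. */
#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long max_hz = 50000000;	/* devdata->max_hz */
	unsigned long min_divisor = 4;		/* devdata->min_divisor */
	unsigned long tclks[] = { 166000000, 200000000, 250000000 };

	for (int i = 0; i < 3; i++) {
		unsigned long by_div = DIV_ROUND_UP(tclks[i], min_divisor);
		unsigned long limit = by_div < max_hz ? by_div : max_hz;

		printf("tclk %lu -> max_speed_hz %lu\n", tclks[i], limit);
	}
	return 0;
}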
r = platform_get_resource(pdev, IORESOURCE_MEM, 0); r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
......
...@@ -62,7 +62,7 @@ static struct pxa_spi_info spi_info_configs[] = { ...@@ -62,7 +62,7 @@ static struct pxa_spi_info spi_info_configs[] = {
.max_clk_rate = 3686400, .max_clk_rate = 3686400,
}, },
[PORT_BYT] = { [PORT_BYT] = {
.type = LPSS_SSP, .type = LPSS_BYT_SSP,
.port_id = 0, .port_id = 0,
.num_chipselect = 1, .num_chipselect = 1,
.max_clk_rate = 50000000, .max_clk_rate = 50000000,
...@@ -70,7 +70,7 @@ static struct pxa_spi_info spi_info_configs[] = { ...@@ -70,7 +70,7 @@ static struct pxa_spi_info spi_info_configs[] = {
.rx_param = &byt_rx_param, .rx_param = &byt_rx_param,
}, },
[PORT_BSW0] = { [PORT_BSW0] = {
.type = LPSS_SSP, .type = LPSS_BYT_SSP,
.port_id = 0, .port_id = 0,
.num_chipselect = 1, .num_chipselect = 1,
.max_clk_rate = 50000000, .max_clk_rate = 50000000,
...@@ -78,7 +78,7 @@ static struct pxa_spi_info spi_info_configs[] = { ...@@ -78,7 +78,7 @@ static struct pxa_spi_info spi_info_configs[] = {
.rx_param = &bsw0_rx_param, .rx_param = &bsw0_rx_param,
}, },
[PORT_BSW1] = { [PORT_BSW1] = {
.type = LPSS_SSP, .type = LPSS_BYT_SSP,
.port_id = 1, .port_id = 1,
.num_chipselect = 1, .num_chipselect = 1,
.max_clk_rate = 50000000, .max_clk_rate = 50000000,
...@@ -86,7 +86,7 @@ static struct pxa_spi_info spi_info_configs[] = { ...@@ -86,7 +86,7 @@ static struct pxa_spi_info spi_info_configs[] = {
.rx_param = &bsw1_rx_param, .rx_param = &bsw1_rx_param,
}, },
[PORT_BSW2] = { [PORT_BSW2] = {
.type = LPSS_SSP, .type = LPSS_BYT_SSP,
.port_id = 2, .port_id = 2,
.num_chipselect = 1, .num_chipselect = 1,
.max_clk_rate = 50000000, .max_clk_rate = 50000000,
......
/*
* PXA2xx SPI private DMA support.
*
* Copyright (C) 2005 Stephen Street / StreetFire Sound Labs
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/pxa2xx_ssp.h>
#include <linux/spi/spi.h>
#include <linux/spi/pxa2xx_spi.h>
#include <mach/dma.h>
#include "spi-pxa2xx.h"
#define DMA_INT_MASK (DCSR_ENDINTR | DCSR_STARTINTR | DCSR_BUSERR)
#define RESET_DMA_CHANNEL (DCSR_NODESC | DMA_INT_MASK)
bool pxa2xx_spi_dma_is_possible(size_t len)
{
/* Try to map dma buffer and do a dma transfer if successful, but
* only if the length is non-zero and less than MAX_DMA_LEN.
*
* Zero-length non-descriptor DMA is illegal on PXA2xx; force use
* of PIO instead. Care is needed above because the transfer may
	 * have been passed with buffers that are already dma mapped.
* A zero-length transfer in PIO mode will not try to write/read
* to/from the buffers
*
* REVISIT large transfers are exactly where we most want to be
* using DMA. If this happens much, split those transfers into
* multiple DMA segments rather than forcing PIO.
*/
return len > 0 && len <= MAX_DMA_LEN;
}
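The length gate above is easier to see in isolation (this is not driver code): zero-length descriptors are rejected outright and anything beyond MAX_DMA_LEN, which this file defines as 8191, is pushed back to PIO.

/* Illustrative only; same predicate as pxa2xx_spi_dma_is_possible(). */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define MAX_DMA_LEN 8191

static bool dma_is_possible(size_t len)
{
	return len > 0 && len <= MAX_DMA_LEN;
}

int main(void)
{
	printf("len 0: %d, len 4096: %d, len 16384: %d\n",
	       dma_is_possible(0), dma_is_possible(4096),
	       dma_is_possible(16384));
	return 0;
}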
int pxa2xx_spi_map_dma_buffers(struct driver_data *drv_data)
{
struct spi_message *msg = drv_data->cur_msg;
struct device *dev = &msg->spi->dev;
if (!drv_data->cur_chip->enable_dma)
return 0;
if (msg->is_dma_mapped)
return drv_data->rx_dma && drv_data->tx_dma;
if (!IS_DMA_ALIGNED(drv_data->rx) || !IS_DMA_ALIGNED(drv_data->tx))
return 0;
/* Modify setup if rx buffer is null */
if (drv_data->rx == NULL) {
*drv_data->null_dma_buf = 0;
drv_data->rx = drv_data->null_dma_buf;
drv_data->rx_map_len = 4;
} else
drv_data->rx_map_len = drv_data->len;
/* Modify setup if tx buffer is null */
if (drv_data->tx == NULL) {
*drv_data->null_dma_buf = 0;
drv_data->tx = drv_data->null_dma_buf;
drv_data->tx_map_len = 4;
} else
drv_data->tx_map_len = drv_data->len;
/* Stream map the tx buffer. Always do DMA_TO_DEVICE first
* so we flush the cache *before* invalidating it, in case
* the tx and rx buffers overlap.
*/
drv_data->tx_dma = dma_map_single(dev, drv_data->tx,
drv_data->tx_map_len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, drv_data->tx_dma))
return 0;
/* Stream map the rx buffer */
drv_data->rx_dma = dma_map_single(dev, drv_data->rx,
drv_data->rx_map_len, DMA_FROM_DEVICE);
if (dma_mapping_error(dev, drv_data->rx_dma)) {
dma_unmap_single(dev, drv_data->tx_dma,
drv_data->tx_map_len, DMA_TO_DEVICE);
return 0;
}
return 1;
}
static void pxa2xx_spi_unmap_dma_buffers(struct driver_data *drv_data)
{
struct device *dev;
if (!drv_data->dma_mapped)
return;
if (!drv_data->cur_msg->is_dma_mapped) {
dev = &drv_data->cur_msg->spi->dev;
dma_unmap_single(dev, drv_data->rx_dma,
drv_data->rx_map_len, DMA_FROM_DEVICE);
dma_unmap_single(dev, drv_data->tx_dma,
drv_data->tx_map_len, DMA_TO_DEVICE);
}
drv_data->dma_mapped = 0;
}
static int wait_ssp_rx_stall(struct driver_data *drv_data)
{
unsigned long limit = loops_per_jiffy << 1;
while ((pxa2xx_spi_read(drv_data, SSSR) & SSSR_BSY) && --limit)
cpu_relax();
return limit;
}
static int wait_dma_channel_stop(int channel)
{
unsigned long limit = loops_per_jiffy << 1;
while (!(DCSR(channel) & DCSR_STOPSTATE) && --limit)
cpu_relax();
return limit;
}
static void pxa2xx_spi_dma_error_stop(struct driver_data *drv_data,
const char *msg)
{
/* Stop and reset */
DCSR(drv_data->rx_channel) = RESET_DMA_CHANNEL;
DCSR(drv_data->tx_channel) = RESET_DMA_CHANNEL;
write_SSSR_CS(drv_data, drv_data->clear_sr);
pxa2xx_spi_write(drv_data, SSCR1,
pxa2xx_spi_read(drv_data, SSCR1)
& ~drv_data->dma_cr1);
if (!pxa25x_ssp_comp(drv_data))
pxa2xx_spi_write(drv_data, SSTO, 0);
pxa2xx_spi_flush(drv_data);
pxa2xx_spi_write(drv_data, SSCR0,
pxa2xx_spi_read(drv_data, SSCR0) & ~SSCR0_SSE);
pxa2xx_spi_unmap_dma_buffers(drv_data);
dev_err(&drv_data->pdev->dev, "%s\n", msg);
drv_data->cur_msg->state = ERROR_STATE;
tasklet_schedule(&drv_data->pump_transfers);
}
static void pxa2xx_spi_dma_transfer_complete(struct driver_data *drv_data)
{
struct spi_message *msg = drv_data->cur_msg;
/* Clear and disable interrupts on SSP and DMA channels*/
pxa2xx_spi_write(drv_data, SSCR1,
pxa2xx_spi_read(drv_data, SSCR1)
& ~drv_data->dma_cr1);
write_SSSR_CS(drv_data, drv_data->clear_sr);
DCSR(drv_data->tx_channel) = RESET_DMA_CHANNEL;
DCSR(drv_data->rx_channel) = RESET_DMA_CHANNEL;
if (wait_dma_channel_stop(drv_data->rx_channel) == 0)
dev_err(&drv_data->pdev->dev,
"dma_handler: dma rx channel stop failed\n");
if (wait_ssp_rx_stall(drv_data->ioaddr) == 0)
dev_err(&drv_data->pdev->dev,
"dma_transfer: ssp rx stall failed\n");
pxa2xx_spi_unmap_dma_buffers(drv_data);
/* update the buffer pointer for the amount completed in dma */
drv_data->rx += drv_data->len -
(DCMD(drv_data->rx_channel) & DCMD_LENGTH);
	/* read trailing data from the fifo; it does not matter how many
	 * bytes are in the fifo, just read until the buffer is full
	 * or the fifo is empty, whichever occurs first */
drv_data->read(drv_data);
/* return count of what was actually read */
msg->actual_length += drv_data->len -
(drv_data->rx_end - drv_data->rx);
/* Transfer delays and chip select release are
* handled in pump_transfers or giveback
*/
/* Move to next transfer */
msg->state = pxa2xx_spi_next_transfer(drv_data);
/* Schedule transfer tasklet */
tasklet_schedule(&drv_data->pump_transfers);
}
void pxa2xx_spi_dma_handler(int channel, void *data)
{
struct driver_data *drv_data = data;
u32 irq_status = DCSR(channel) & DMA_INT_MASK;
if (irq_status & DCSR_BUSERR) {
if (channel == drv_data->tx_channel)
pxa2xx_spi_dma_error_stop(drv_data,
"dma_handler: bad bus address on tx channel");
else
pxa2xx_spi_dma_error_stop(drv_data,
"dma_handler: bad bus address on rx channel");
return;
}
	/* PXA25x_SSP has no timeout interrupt, wait for trailing bytes */
if ((channel == drv_data->tx_channel)
&& (irq_status & DCSR_ENDINTR)
&& (drv_data->ssp_type == PXA25x_SSP)) {
/* Wait for rx to stall */
if (wait_ssp_rx_stall(drv_data) == 0)
dev_err(&drv_data->pdev->dev,
"dma_handler: ssp rx stall failed\n");
/* finish this transfer, start the next */
pxa2xx_spi_dma_transfer_complete(drv_data);
}
}
irqreturn_t pxa2xx_spi_dma_transfer(struct driver_data *drv_data)
{
u32 irq_status;
irq_status = pxa2xx_spi_read(drv_data, SSSR) & drv_data->mask_sr;
if (irq_status & SSSR_ROR) {
pxa2xx_spi_dma_error_stop(drv_data,
"dma_transfer: fifo overrun");
return IRQ_HANDLED;
}
/* Check for false positive timeout */
if ((irq_status & SSSR_TINT)
&& (DCSR(drv_data->tx_channel) & DCSR_RUN)) {
pxa2xx_spi_write(drv_data, SSSR, SSSR_TINT);
return IRQ_HANDLED;
}
if (irq_status & SSSR_TINT || drv_data->rx == drv_data->rx_end) {
/* Clear and disable timeout interrupt, do the rest in
* dma_transfer_complete */
if (!pxa25x_ssp_comp(drv_data))
pxa2xx_spi_write(drv_data, SSTO, 0);
/* finish this transfer, start the next */
pxa2xx_spi_dma_transfer_complete(drv_data);
return IRQ_HANDLED;
}
	/* Oops, problem detected */
return IRQ_NONE;
}
int pxa2xx_spi_dma_prepare(struct driver_data *drv_data, u32 dma_burst)
{
u32 dma_width;
switch (drv_data->n_bytes) {
case 1:
dma_width = DCMD_WIDTH1;
break;
case 2:
dma_width = DCMD_WIDTH2;
break;
default:
dma_width = DCMD_WIDTH4;
break;
}
/* Setup rx DMA Channel */
DCSR(drv_data->rx_channel) = RESET_DMA_CHANNEL;
DSADR(drv_data->rx_channel) = drv_data->ssdr_physical;
DTADR(drv_data->rx_channel) = drv_data->rx_dma;
if (drv_data->rx == drv_data->null_dma_buf)
/* No target address increment */
DCMD(drv_data->rx_channel) = DCMD_FLOWSRC
| dma_width
| dma_burst
| drv_data->len;
else
DCMD(drv_data->rx_channel) = DCMD_INCTRGADDR
| DCMD_FLOWSRC
| dma_width
| dma_burst
| drv_data->len;
/* Setup tx DMA Channel */
DCSR(drv_data->tx_channel) = RESET_DMA_CHANNEL;
DSADR(drv_data->tx_channel) = drv_data->tx_dma;
DTADR(drv_data->tx_channel) = drv_data->ssdr_physical;
if (drv_data->tx == drv_data->null_dma_buf)
/* No source address increment */
DCMD(drv_data->tx_channel) = DCMD_FLOWTRG
| dma_width
| dma_burst
| drv_data->len;
else
DCMD(drv_data->tx_channel) = DCMD_INCSRCADDR
| DCMD_FLOWTRG
| dma_width
| dma_burst
| drv_data->len;
/* Enable dma end irqs on SSP to detect end of transfer */
if (drv_data->ssp_type == PXA25x_SSP)
DCMD(drv_data->tx_channel) |= DCMD_ENDIRQEN;
return 0;
}
void pxa2xx_spi_dma_start(struct driver_data *drv_data)
{
DCSR(drv_data->rx_channel) |= DCSR_RUN;
DCSR(drv_data->tx_channel) |= DCSR_RUN;
}
int pxa2xx_spi_dma_setup(struct driver_data *drv_data)
{
struct device *dev = &drv_data->pdev->dev;
struct ssp_device *ssp = drv_data->ssp;
/* Get two DMA channels (rx and tx) */
drv_data->rx_channel = pxa_request_dma("pxa2xx_spi_ssp_rx",
DMA_PRIO_HIGH,
pxa2xx_spi_dma_handler,
drv_data);
if (drv_data->rx_channel < 0) {
dev_err(dev, "problem (%d) requesting rx channel\n",
drv_data->rx_channel);
return -ENODEV;
}
drv_data->tx_channel = pxa_request_dma("pxa2xx_spi_ssp_tx",
DMA_PRIO_MEDIUM,
pxa2xx_spi_dma_handler,
drv_data);
if (drv_data->tx_channel < 0) {
dev_err(dev, "problem (%d) requesting tx channel\n",
drv_data->tx_channel);
pxa_free_dma(drv_data->rx_channel);
return -ENODEV;
}
DRCMR(ssp->drcmr_rx) = DRCMR_MAPVLD | drv_data->rx_channel;
DRCMR(ssp->drcmr_tx) = DRCMR_MAPVLD | drv_data->tx_channel;
return 0;
}
void pxa2xx_spi_dma_release(struct driver_data *drv_data)
{
struct ssp_device *ssp = drv_data->ssp;
DRCMR(ssp->drcmr_rx) = 0;
DRCMR(ssp->drcmr_tx) = 0;
if (drv_data->tx_channel != 0)
pxa_free_dma(drv_data->tx_channel);
if (drv_data->rx_channel != 0)
pxa_free_dma(drv_data->rx_channel);
}
void pxa2xx_spi_dma_resume(struct driver_data *drv_data)
{
if (drv_data->rx_channel != -1)
DRCMR(drv_data->ssp->drcmr_rx) =
DRCMR_MAPVLD | drv_data->rx_channel;
if (drv_data->tx_channel != -1)
DRCMR(drv_data->ssp->drcmr_tx) =
DRCMR_MAPVLD | drv_data->tx_channel;
}
int pxa2xx_spi_set_dma_burst_and_threshold(struct chip_data *chip,
struct spi_device *spi,
u8 bits_per_word, u32 *burst_code,
u32 *threshold)
{
struct pxa2xx_spi_chip *chip_info =
(struct pxa2xx_spi_chip *)spi->controller_data;
int bytes_per_word;
int burst_bytes;
int thresh_words;
int req_burst_size;
int retval = 0;
/* Set the threshold (in registers) to equal the same amount of data
* as represented by burst size (in bytes). The computation below
* is (burst_size rounded up to nearest 8 byte, word or long word)
* divided by (bytes/register); the tx threshold is the inverse of
* the rx, so that there will always be enough data in the rx fifo
* to satisfy a burst, and there will always be enough space in the
* tx fifo to accept a burst (a tx burst will overwrite the fifo if
* there is not enough space), there must always remain enough empty
* space in the rx fifo for any data loaded to the tx fifo.
* Whenever burst_size (in bytes) equals bits/word, the fifo threshold
* will be 8, or half the fifo;
* The threshold can only be set to 2, 4 or 8, but not 16, because
* to burst 16 to the tx fifo, the fifo would have to be empty;
* however, the minimum fifo trigger level is 1, and the tx will
* request service when the fifo is at this level, with only 15 spaces.
*/
/* find bytes/word */
if (bits_per_word <= 8)
bytes_per_word = 1;
else if (bits_per_word <= 16)
bytes_per_word = 2;
else
bytes_per_word = 4;
/* use struct pxa2xx_spi_chip->dma_burst_size if available */
if (chip_info)
req_burst_size = chip_info->dma_burst_size;
else {
switch (chip->dma_burst_size) {
default:
/* if the default burst size is not set,
* do it now */
chip->dma_burst_size = DCMD_BURST8;
case DCMD_BURST8:
req_burst_size = 8;
break;
case DCMD_BURST16:
req_burst_size = 16;
break;
case DCMD_BURST32:
req_burst_size = 32;
break;
}
}
if (req_burst_size <= 8) {
*burst_code = DCMD_BURST8;
burst_bytes = 8;
} else if (req_burst_size <= 16) {
if (bytes_per_word == 1) {
/* don't burst more than 1/2 the fifo */
*burst_code = DCMD_BURST8;
burst_bytes = 8;
retval = 1;
} else {
*burst_code = DCMD_BURST16;
burst_bytes = 16;
}
} else {
if (bytes_per_word == 1) {
/* don't burst more than 1/2 the fifo */
*burst_code = DCMD_BURST8;
burst_bytes = 8;
retval = 1;
} else if (bytes_per_word == 2) {
/* don't burst more than 1/2 the fifo */
*burst_code = DCMD_BURST16;
burst_bytes = 16;
retval = 1;
} else {
*burst_code = DCMD_BURST32;
burst_bytes = 32;
}
}
thresh_words = burst_bytes / bytes_per_word;
/* thresh_words will be between 2 and 8 */
*threshold = (SSCR1_RxTresh(thresh_words) & SSCR1_RFT)
| (SSCR1_TxTresh(16-thresh_words) & SSCR1_TFT);
return retval;
}
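A worked example of the threshold arithmetic the long comment above describes (commentary only, not part of the driver): with 16 bits per word and a 16-byte burst, the burst spans 8 FIFO entries, so the RX trigger is set at 8 entries and the TX trigger at the mirrored 16 - 8 free entries.

/* Illustrative only; standalone version of the thresh_words computation. */
#include <stdio.h>

int main(void)
{
	int bits_per_word = 16;
	int bytes_per_word = bits_per_word <= 8 ? 1 :
			     bits_per_word <= 16 ? 2 : 4;
	int burst_bytes = 16;
	int thresh_words = burst_bytes / bytes_per_word;	/* 8 */

	printf("rx threshold %d entries, tx threshold %d free entries\n",
	       thresh_words, 16 - thresh_words);
	return 0;
}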
...@@ -60,21 +60,60 @@ MODULE_ALIAS("platform:pxa2xx-spi"); ...@@ -60,21 +60,60 @@ MODULE_ALIAS("platform:pxa2xx-spi");
| QUARK_X1000_SSCR1_TFT \ | QUARK_X1000_SSCR1_TFT \
| SSCR1_SPH | SSCR1_SPO | SSCR1_LBM) | SSCR1_SPH | SSCR1_SPO | SSCR1_LBM)
#define LPSS_RX_THRESH_DFLT 64
#define LPSS_TX_LOTHRESH_DFLT 160
#define LPSS_TX_HITHRESH_DFLT 224
/* Offset from drv_data->lpss_base */
#define GENERAL_REG 0x08
#define GENERAL_REG_RXTO_HOLDOFF_DISABLE BIT(24) #define GENERAL_REG_RXTO_HOLDOFF_DISABLE BIT(24)
#define SSP_REG 0x0c
#define SPI_CS_CONTROL 0x18
#define SPI_CS_CONTROL_SW_MODE BIT(0) #define SPI_CS_CONTROL_SW_MODE BIT(0)
#define SPI_CS_CONTROL_CS_HIGH BIT(1) #define SPI_CS_CONTROL_CS_HIGH BIT(1)
struct lpss_config {
/* LPSS offset from drv_data->ioaddr */
unsigned offset;
/* Register offsets from drv_data->lpss_base or -1 */
int reg_general;
int reg_ssp;
int reg_cs_ctrl;
/* FIFO thresholds */
u32 rx_threshold;
u32 tx_threshold_lo;
u32 tx_threshold_hi;
};
/* Keep these sorted with enum pxa_ssp_type */
static const struct lpss_config lpss_platforms[] = {
{ /* LPSS_LPT_SSP */
.offset = 0x800,
.reg_general = 0x08,
.reg_ssp = 0x0c,
.reg_cs_ctrl = 0x18,
.rx_threshold = 64,
.tx_threshold_lo = 160,
.tx_threshold_hi = 224,
},
{ /* LPSS_BYT_SSP */
.offset = 0x400,
.reg_general = 0x08,
.reg_ssp = 0x0c,
.reg_cs_ctrl = 0x18,
.rx_threshold = 64,
.tx_threshold_lo = 160,
.tx_threshold_hi = 224,
},
};
static inline const struct lpss_config
*lpss_get_config(const struct driver_data *drv_data)
{
return &lpss_platforms[drv_data->ssp_type - LPSS_LPT_SSP];
}
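For context (not part of the diff): lpss_get_config() indexes lpss_platforms[] by the SSP type relative to LPSS_LPT_SSP, which is why the comment above insists the table stay sorted with enum pxa_ssp_type. A standalone sketch of the indexing, with made-up enum values rather than the real ones from the pxa2xx SSP header:

/* Illustrative only; hypothetical enum values, real ones live elsewhere. */
#include <stdio.h>

enum pxa_ssp_type { LPSS_LPT_SSP = 100, LPSS_BYT_SSP };	/* made up */

static const char *lpss_platforms[] = {
	"LPT config (offset 0x800)",
	"BYT config (offset 0x400)",
};

int main(void)
{
	enum pxa_ssp_type t = LPSS_BYT_SSP;

	printf("%s\n", lpss_platforms[t - LPSS_LPT_SSP]);
	return 0;
}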
 static bool is_lpss_ssp(const struct driver_data *drv_data)
 {
-	return drv_data->ssp_type == LPSS_SSP;
+	switch (drv_data->ssp_type) {
+	case LPSS_LPT_SSP:
+	case LPSS_BYT_SSP:
+		return true;
+	default:
+		return false;
+	}
 }
static bool is_quark_x1000_ssp(const struct driver_data *drv_data) static bool is_quark_x1000_ssp(const struct driver_data *drv_data)
...@@ -192,63 +231,43 @@ static void __lpss_ssp_write_priv(struct driver_data *drv_data, ...@@ -192,63 +231,43 @@ static void __lpss_ssp_write_priv(struct driver_data *drv_data,
*/ */
 static void lpss_ssp_setup(struct driver_data *drv_data)
 {
-	unsigned offset = 0x400;
-	u32 value, orig;
-
-	/*
-	 * Perform auto-detection of the LPSS SSP private registers. They
-	 * can be either at 1k or 2k offset from the base address.
-	 */
-	orig = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL);
-
-	/* Test SPI_CS_CONTROL_SW_MODE bit enabling */
-	value = orig | SPI_CS_CONTROL_SW_MODE;
-	writel(value, drv_data->ioaddr + offset + SPI_CS_CONTROL);
-	value = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL);
-	if (value != (orig | SPI_CS_CONTROL_SW_MODE)) {
-		offset = 0x800;
-		goto detection_done;
-	}
-
-	orig = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL);
-
-	/* Test SPI_CS_CONTROL_SW_MODE bit disabling */
-	value = orig & ~SPI_CS_CONTROL_SW_MODE;
-	writel(value, drv_data->ioaddr + offset + SPI_CS_CONTROL);
-	value = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL);
-	if (value != (orig & ~SPI_CS_CONTROL_SW_MODE)) {
-		offset = 0x800;
-		goto detection_done;
-	}
-
-detection_done:
-	/* Now set the LPSS base */
-	drv_data->lpss_base = drv_data->ioaddr + offset;
+	const struct lpss_config *config;
+	u32 value;
+
+	config = lpss_get_config(drv_data);
+	drv_data->lpss_base = drv_data->ioaddr + config->offset;

 	/* Enable software chip select control */
 	value = SPI_CS_CONTROL_SW_MODE | SPI_CS_CONTROL_CS_HIGH;
-	__lpss_ssp_write_priv(drv_data, SPI_CS_CONTROL, value);
+	__lpss_ssp_write_priv(drv_data, config->reg_cs_ctrl, value);

 	/* Enable multiblock DMA transfers */
 	if (drv_data->master_info->enable_dma) {
-		__lpss_ssp_write_priv(drv_data, SSP_REG, 1);
-
-		value = __lpss_ssp_read_priv(drv_data, GENERAL_REG);
-		value |= GENERAL_REG_RXTO_HOLDOFF_DISABLE;
-		__lpss_ssp_write_priv(drv_data, GENERAL_REG, value);
+		__lpss_ssp_write_priv(drv_data, config->reg_ssp, 1);
+
+		if (config->reg_general >= 0) {
+			value = __lpss_ssp_read_priv(drv_data,
+						     config->reg_general);
+			value |= GENERAL_REG_RXTO_HOLDOFF_DISABLE;
+			__lpss_ssp_write_priv(drv_data,
+					      config->reg_general, value);
+		}
 	}
 }
 static void lpss_ssp_cs_control(struct driver_data *drv_data, bool enable)
 {
+	const struct lpss_config *config;
 	u32 value;

-	value = __lpss_ssp_read_priv(drv_data, SPI_CS_CONTROL);
+	config = lpss_get_config(drv_data);
+
+	value = __lpss_ssp_read_priv(drv_data, config->reg_cs_ctrl);
 	if (enable)
 		value &= ~SPI_CS_CONTROL_CS_HIGH;
 	else
 		value |= SPI_CS_CONTROL_CS_HIGH;
-	__lpss_ssp_write_priv(drv_data, SPI_CS_CONTROL, value);
+	__lpss_ssp_write_priv(drv_data, config->reg_cs_ctrl, value);
 }
static void cs_assert(struct driver_data *drv_data) static void cs_assert(struct driver_data *drv_data)
...@@ -1075,6 +1094,7 @@ static int setup(struct spi_device *spi) ...@@ -1075,6 +1094,7 @@ static int setup(struct spi_device *spi)
{ {
struct pxa2xx_spi_chip *chip_info = NULL; struct pxa2xx_spi_chip *chip_info = NULL;
struct chip_data *chip; struct chip_data *chip;
const struct lpss_config *config;
struct driver_data *drv_data = spi_master_get_devdata(spi->master); struct driver_data *drv_data = spi_master_get_devdata(spi->master);
unsigned int clk_div; unsigned int clk_div;
uint tx_thres, tx_hi_thres, rx_thres; uint tx_thres, tx_hi_thres, rx_thres;
...@@ -1085,10 +1105,12 @@ static int setup(struct spi_device *spi) ...@@ -1085,10 +1105,12 @@ static int setup(struct spi_device *spi)
tx_hi_thres = 0; tx_hi_thres = 0;
rx_thres = RX_THRESH_QUARK_X1000_DFLT; rx_thres = RX_THRESH_QUARK_X1000_DFLT;
break; break;
-	case LPSS_SSP:
-		tx_thres = LPSS_TX_LOTHRESH_DFLT;
-		tx_hi_thres = LPSS_TX_HITHRESH_DFLT;
-		rx_thres = LPSS_RX_THRESH_DFLT;
+	case LPSS_LPT_SSP:
+	case LPSS_BYT_SSP:
+		config = lpss_get_config(drv_data);
+		tx_thres = config->tx_threshold_lo;
+		tx_hi_thres = config->tx_threshold_hi;
+		rx_thres = config->rx_threshold;
 		break;
default: default:
tx_thres = TX_THRESH_DFLT; tx_thres = TX_THRESH_DFLT;
...@@ -1242,6 +1264,18 @@ static void cleanup(struct spi_device *spi) ...@@ -1242,6 +1264,18 @@ static void cleanup(struct spi_device *spi)
} }
#ifdef CONFIG_ACPI #ifdef CONFIG_ACPI
static const struct acpi_device_id pxa2xx_spi_acpi_match[] = {
{ "INT33C0", LPSS_LPT_SSP },
{ "INT33C1", LPSS_LPT_SSP },
{ "INT3430", LPSS_LPT_SSP },
{ "INT3431", LPSS_LPT_SSP },
{ "80860F0E", LPSS_BYT_SSP },
{ "8086228E", LPSS_BYT_SSP },
{ },
};
MODULE_DEVICE_TABLE(acpi, pxa2xx_spi_acpi_match);
static struct pxa2xx_spi_master * static struct pxa2xx_spi_master *
pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev) pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev)
{ {
...@@ -1249,12 +1283,19 @@ pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev) ...@@ -1249,12 +1283,19 @@ pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev)
struct acpi_device *adev; struct acpi_device *adev;
struct ssp_device *ssp; struct ssp_device *ssp;
struct resource *res; struct resource *res;
-	int devid;
+	const struct acpi_device_id *id;
+	int devid, type;

 	if (!ACPI_HANDLE(&pdev->dev) ||
 	    acpi_bus_get_device(ACPI_HANDLE(&pdev->dev), &adev))
 		return NULL;

+	id = acpi_match_device(pdev->dev.driver->acpi_match_table, &pdev->dev);
+	if (id)
+		type = (int)id->driver_data;
+	else
+		return NULL;
+
pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata) if (!pdata)
return NULL; return NULL;
...@@ -1272,7 +1313,7 @@ pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev) ...@@ -1272,7 +1313,7 @@ pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev)
ssp->clk = devm_clk_get(&pdev->dev, NULL); ssp->clk = devm_clk_get(&pdev->dev, NULL);
ssp->irq = platform_get_irq(pdev, 0); ssp->irq = platform_get_irq(pdev, 0);
-	ssp->type = LPSS_SSP;
+	ssp->type = type;
ssp->pdev = pdev; ssp->pdev = pdev;
ssp->port_id = -1; ssp->port_id = -1;
...@@ -1285,16 +1326,6 @@ pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev) ...@@ -1285,16 +1326,6 @@ pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev)
return pdata; return pdata;
} }
static struct acpi_device_id pxa2xx_spi_acpi_match[] = {
{ "INT33C0", 0 },
{ "INT33C1", 0 },
{ "INT3430", 0 },
{ "INT3431", 0 },
{ "80860F0E", 0 },
{ "8086228E", 0 },
{ },
};
MODULE_DEVICE_TABLE(acpi, pxa2xx_spi_acpi_match);
#else #else
static inline struct pxa2xx_spi_master * static inline struct pxa2xx_spi_master *
pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev) pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev)
......
...@@ -162,11 +162,7 @@ extern void *pxa2xx_spi_next_transfer(struct driver_data *drv_data); ...@@ -162,11 +162,7 @@ extern void *pxa2xx_spi_next_transfer(struct driver_data *drv_data);
/* /*
* Select the right DMA implementation. * Select the right DMA implementation.
*/ */
-#if defined(CONFIG_SPI_PXA2XX_PXADMA)
-#define SPI_PXA2XX_USE_DMA	1
-#define MAX_DMA_LEN		8191
-#define DEFAULT_DMA_CR1		(SSCR1_TSRE | SSCR1_RSRE | SSCR1_TINTE)
-#elif defined(CONFIG_SPI_PXA2XX_DMA)
+#if defined(CONFIG_SPI_PXA2XX_DMA)
#define SPI_PXA2XX_USE_DMA 1 #define SPI_PXA2XX_USE_DMA 1
#define MAX_DMA_LEN SZ_64K #define MAX_DMA_LEN SZ_64K
#define DEFAULT_DMA_CR1 (SSCR1_TSRE | SSCR1_RSRE | SSCR1_TRAIL) #define DEFAULT_DMA_CR1 (SSCR1_TSRE | SSCR1_RSRE | SSCR1_TRAIL)
......
/*
* SPI controller driver for the Mikrotik RB4xx boards
*
* Copyright (C) 2010 Gabor Juhos <juhosg@openwrt.org>
* Copyright (C) 2015 Bert Vermeulen <bert@biot.com>
*
* This file was based on the patches for Linux 2.6.27.39 published by
* MikroTik for their RouterBoard 4xx series devices.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/clk.h>
#include <linux/spi/spi.h>
#include <asm/mach-ath79/ar71xx_regs.h>
struct rb4xx_spi {
void __iomem *base;
struct clk *clk;
};
static inline u32 rb4xx_read(struct rb4xx_spi *rbspi, u32 reg)
{
return __raw_readl(rbspi->base + reg);
}
static inline void rb4xx_write(struct rb4xx_spi *rbspi, u32 reg, u32 value)
{
__raw_writel(value, rbspi->base + reg);
}
static inline void do_spi_clk(struct rb4xx_spi *rbspi, u32 spi_ioc, int value)
{
u32 regval;
regval = spi_ioc;
if (value & BIT(0))
regval |= AR71XX_SPI_IOC_DO;
rb4xx_write(rbspi, AR71XX_SPI_REG_IOC, regval);
rb4xx_write(rbspi, AR71XX_SPI_REG_IOC, regval | AR71XX_SPI_IOC_CLK);
}
static void do_spi_byte(struct rb4xx_spi *rbspi, u32 spi_ioc, u8 byte)
{
int i;
for (i = 7; i >= 0; i--)
do_spi_clk(rbspi, spi_ioc, byte >> i);
}
/* The CS2 pin is used to clock in a second bit per clock cycle. */
static inline void do_spi_clk_two(struct rb4xx_spi *rbspi, u32 spi_ioc,
u8 value)
{
u32 regval;
regval = spi_ioc;
if (value & BIT(1))
regval |= AR71XX_SPI_IOC_DO;
if (value & BIT(0))
regval |= AR71XX_SPI_IOC_CS2;
rb4xx_write(rbspi, AR71XX_SPI_REG_IOC, regval);
rb4xx_write(rbspi, AR71XX_SPI_REG_IOC, regval | AR71XX_SPI_IOC_CLK);
}
/* Two bits at a time, msb first */
static void do_spi_byte_two(struct rb4xx_spi *rbspi, u32 spi_ioc, u8 byte)
{
do_spi_clk_two(rbspi, spi_ioc, byte >> 6);
do_spi_clk_two(rbspi, spi_ioc, byte >> 4);
do_spi_clk_two(rbspi, spi_ioc, byte >> 2);
do_spi_clk_two(rbspi, spi_ioc, byte >> 0);
}
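The two-bit-per-clock scheme above is worth spelling out (commentary, not driver code): each byte takes four clock cycles, most significant pair first, with bit 1 of the pair driven on DO and bit 0 on the CS2 line.

/* Illustrative only; prints the pairs do_spi_byte_two() would clock out. */
#include <stdio.h>

int main(void)
{
	unsigned char byte = 0xA5;	/* pairs: 10 10 01 01 */

	for (int shift = 6; shift >= 0; shift -= 2) {
		unsigned int pair = (byte >> shift) & 0x3;

		printf("clock: DO=%u CS2=%u\n", (pair >> 1) & 1, pair & 1);
	}
	return 0;
}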
static void rb4xx_set_cs(struct spi_device *spi, bool enable)
{
struct rb4xx_spi *rbspi = spi_master_get_devdata(spi->master);
/*
* Setting CS is done along with bitbanging the actual values,
* since it's all on the same hardware register. However the
* CPLD needs CS deselected after every command.
*/
if (enable)
rb4xx_write(rbspi, AR71XX_SPI_REG_IOC,
AR71XX_SPI_IOC_CS0 | AR71XX_SPI_IOC_CS1);
}
static int rb4xx_transfer_one(struct spi_master *master,
struct spi_device *spi, struct spi_transfer *t)
{
struct rb4xx_spi *rbspi = spi_master_get_devdata(master);
int i;
u32 spi_ioc;
u8 *rx_buf;
const u8 *tx_buf;
/*
* Prime the SPI register with the SPI device selected. The m25p80 boot
* flash and CPLD share the CS0 pin. This works because the CPLD's
* command set was designed to almost not clash with that of the
* boot flash.
*/
if (spi->chip_select == 2)
/* MMC */
spi_ioc = AR71XX_SPI_IOC_CS0;
else
/* Boot flash and CPLD */
spi_ioc = AR71XX_SPI_IOC_CS1;
tx_buf = t->tx_buf;
rx_buf = t->rx_buf;
for (i = 0; i < t->len; ++i) {
if (t->tx_nbits == SPI_NBITS_DUAL)
/* CPLD can use two-wire transfers */
do_spi_byte_two(rbspi, spi_ioc, tx_buf[i]);
else
do_spi_byte(rbspi, spi_ioc, tx_buf[i]);
if (!rx_buf)
continue;
rx_buf[i] = rb4xx_read(rbspi, AR71XX_SPI_REG_RDS);
}
spi_finalize_current_transfer(master);
return 0;
}
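For context only: a client sitting on this controller could request the dual-TX path described above through the ordinary SPI API. The sketch below is hypothetical consumer code, not part of this driver; it assumes the slave device was set up with SPI_TX_DUAL in its mode and that 'spi' comes from the client's probe().

/* Illustrative client-side sketch using the generic SPI message API. */
#include <linux/spi/spi.h>

static int rb4xx_example_dual_write(struct spi_device *spi,
				    const u8 *buf, size_t len)
{
	struct spi_transfer xfer = {
		.tx_buf = buf,
		.len = len,
		.tx_nbits = SPI_NBITS_DUAL,	/* two bits per clock cycle */
	};
	struct spi_message msg;

	spi_message_init(&msg);
	spi_message_add_tail(&xfer, &msg);
	return spi_sync(spi, &msg);
}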
static int rb4xx_spi_probe(struct platform_device *pdev)
{
struct spi_master *master;
struct clk *ahb_clk;
struct rb4xx_spi *rbspi;
struct resource *r;
int err;
void __iomem *spi_base;
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
spi_base = devm_ioremap_resource(&pdev->dev, r);
if (IS_ERR(spi_base))
return PTR_ERR(spi_base);
master = spi_alloc_master(&pdev->dev, sizeof(*rbspi));
if (!master)
return -ENOMEM;
ahb_clk = devm_clk_get(&pdev->dev, "ahb");
if (IS_ERR(ahb_clk))
return PTR_ERR(ahb_clk);
master->bus_num = 0;
master->num_chipselect = 3;
master->mode_bits = SPI_TX_DUAL;
master->bits_per_word_mask = BIT(7);
master->flags = SPI_MASTER_MUST_TX;
master->transfer_one = rb4xx_transfer_one;
master->set_cs = rb4xx_set_cs;
err = devm_spi_register_master(&pdev->dev, master);
if (err) {
dev_err(&pdev->dev, "failed to register SPI master\n");
return err;
}
err = clk_prepare_enable(ahb_clk);
if (err)
return err;
rbspi = spi_master_get_devdata(master);
rbspi->base = spi_base;
rbspi->clk = ahb_clk;
platform_set_drvdata(pdev, rbspi);
/* Enable SPI */
rb4xx_write(rbspi, AR71XX_SPI_REG_FS, AR71XX_SPI_FS_GPIO);
return 0;
}
static int rb4xx_spi_remove(struct platform_device *pdev)
{
struct rb4xx_spi *rbspi = platform_get_drvdata(pdev);
clk_disable_unprepare(rbspi->clk);
return 0;
}
static struct platform_driver rb4xx_spi_drv = {
.probe = rb4xx_spi_probe,
.remove = rb4xx_spi_remove,
.driver = {
.name = "rb4xx-spi",
},
};
module_platform_driver(rb4xx_spi_drv);
MODULE_DESCRIPTION("Mikrotik RB4xx SPI controller driver");
MODULE_AUTHOR("Gabor Juhos <juhosg@openwrt.org>");
MODULE_AUTHOR("Bert Vermeulen <bert@biot.com>");
MODULE_LICENSE("GPL v2");
...@@ -665,15 +665,12 @@ static bool rspi_can_dma(struct spi_master *master, struct spi_device *spi, ...@@ -665,15 +665,12 @@ static bool rspi_can_dma(struct spi_master *master, struct spi_device *spi,
 static int rspi_dma_check_then_transfer(struct rspi_data *rspi,
 					 struct spi_transfer *xfer)
 {
-	if (rspi->master->can_dma && __rspi_can_dma(rspi, xfer)) {
-		/* rx_buf can be NULL on RSPI on SH in TX-only Mode */
-		int ret = rspi_dma_transfer(rspi, &xfer->tx_sg,
-					xfer->rx_buf ? &xfer->rx_sg : NULL);
-		if (ret != -EAGAIN)
-			return 0;
-	}
+	if (!rspi->master->can_dma || !__rspi_can_dma(rspi, xfer))
+		return -EAGAIN;

-	return -EAGAIN;
+	/* rx_buf can be NULL on RSPI on SH in TX-only Mode */
+	return rspi_dma_transfer(rspi, &xfer->tx_sg,
+				 xfer->rx_buf ? &xfer->rx_sg : NULL);
 }
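The helper above follows the usual -EAGAIN convention: it either performs the DMA transfer or tells the caller to fall back to PIO. A tiny standalone illustration of that calling pattern (not driver code):

/* Illustrative only; the try-DMA-then-PIO decision in isolation. */
#include <errno.h>
#include <stdio.h>

static int try_dma(int dma_usable)
{
	return dma_usable ? 0 : -EAGAIN;
}

int main(void)
{
	int ret = try_dma(0);

	if (ret == -EAGAIN)
		printf("DMA not usable, falling back to PIO\n");
	else
		printf("transferred via DMA\n");
	return 0;
}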
static int rspi_common_transfer(struct rspi_data *rspi, static int rspi_common_transfer(struct rspi_data *rspi,
...@@ -724,7 +721,7 @@ static int rspi_rz_transfer_one(struct spi_master *master, ...@@ -724,7 +721,7 @@ static int rspi_rz_transfer_one(struct spi_master *master,
return rspi_common_transfer(rspi, xfer); return rspi_common_transfer(rspi, xfer);
} }
-static int qspi_trigger_transfer_out_int(struct rspi_data *rspi, const u8 *tx,
+static int qspi_trigger_transfer_out_in(struct rspi_data *rspi, const u8 *tx,
 					 u8 *rx, unsigned int len)
 {
 	int i, n, ret;
@@ -771,12 +768,8 @@ static int qspi_transfer_out_in(struct rspi_data *rspi,
 	if (ret != -EAGAIN)
 		return ret;

-	ret = qspi_trigger_transfer_out_int(rspi, xfer->tx_buf,
-					    xfer->rx_buf, xfer->len);
-	if (ret < 0)
-		return ret;
-
-	return 0;
+	return qspi_trigger_transfer_out_in(rspi, xfer->tx_buf,
+					    xfer->rx_buf, xfer->len);
 }
static int qspi_transfer_out(struct rspi_data *rspi, struct spi_transfer *xfer) static int qspi_transfer_out(struct rspi_data *rspi, struct spi_transfer *xfer)
...@@ -1300,7 +1293,7 @@ static int rspi_probe(struct platform_device *pdev) ...@@ -1300,7 +1293,7 @@ static int rspi_probe(struct platform_device *pdev)
return ret; return ret;
} }
-static struct platform_device_id spi_driver_ids[] = {
+static const struct platform_device_id spi_driver_ids[] = {
{ "rspi", (kernel_ulong_t)&rspi_ops }, { "rspi", (kernel_ulong_t)&rspi_ops },
{ "rspi-rz", (kernel_ulong_t)&rspi_rz_ops }, { "rspi-rz", (kernel_ulong_t)&rspi_rz_ops },
{ "qspi", (kernel_ulong_t)&qspi_ops }, { "qspi", (kernel_ulong_t)&qspi_ops },
......
@@ -1347,7 +1347,7 @@ static struct s3c64xx_spi_port_config exynos7_spi_port_config = {
 	.quirks		= S3C64XX_SPI_QUIRK_CS_AUTO,
 };
 
-static struct platform_device_id s3c64xx_spi_driver_ids[] = {
+static const struct platform_device_id s3c64xx_spi_driver_ids[] = {
 	{
 		.name		= "s3c2443-spi",
 		.driver_data	= (kernel_ulong_t)&s3c2443_spi_port_config,
......
@@ -1263,7 +1263,7 @@ static int sh_msiof_spi_remove(struct platform_device *pdev)
 	return 0;
 }
 
-static struct platform_device_id spi_driver_ids[] = {
+static const struct platform_device_id spi_driver_ids[] = {
 	{ "spi_sh_msiof",	(kernel_ulong_t)&sh_data },
 	{ "spi_r8a7790_msiof",	(kernel_ulong_t)&r8a779x_data },
 	{ "spi_r8a7791_msiof",	(kernel_ulong_t)&r8a779x_data },
......
@@ -26,28 +26,6 @@
#include <linux/reset.h> #include <linux/reset.h>
#define DRIVER_NAME "sirfsoc_spi" #define DRIVER_NAME "sirfsoc_spi"
#define SIRFSOC_SPI_CTRL 0x0000
#define SIRFSOC_SPI_CMD 0x0004
#define SIRFSOC_SPI_TX_RX_EN 0x0008
#define SIRFSOC_SPI_INT_EN 0x000C
#define SIRFSOC_SPI_INT_STATUS 0x0010
#define SIRFSOC_SPI_TX_DMA_IO_CTRL 0x0100
#define SIRFSOC_SPI_TX_DMA_IO_LEN 0x0104
#define SIRFSOC_SPI_TXFIFO_CTRL 0x0108
#define SIRFSOC_SPI_TXFIFO_LEVEL_CHK 0x010C
#define SIRFSOC_SPI_TXFIFO_OP 0x0110
#define SIRFSOC_SPI_TXFIFO_STATUS 0x0114
#define SIRFSOC_SPI_TXFIFO_DATA 0x0118
#define SIRFSOC_SPI_RX_DMA_IO_CTRL 0x0120
#define SIRFSOC_SPI_RX_DMA_IO_LEN 0x0124
#define SIRFSOC_SPI_RXFIFO_CTRL 0x0128
#define SIRFSOC_SPI_RXFIFO_LEVEL_CHK 0x012C
#define SIRFSOC_SPI_RXFIFO_OP 0x0130
#define SIRFSOC_SPI_RXFIFO_STATUS 0x0134
#define SIRFSOC_SPI_RXFIFO_DATA 0x0138
#define SIRFSOC_SPI_DUMMY_DELAY_CTL 0x0144
/* SPI CTRL register defines */ /* SPI CTRL register defines */
#define SIRFSOC_SPI_SLV_MODE BIT(16) #define SIRFSOC_SPI_SLV_MODE BIT(16)
#define SIRFSOC_SPI_CMD_MODE BIT(17) #define SIRFSOC_SPI_CMD_MODE BIT(17)
@@ -80,8 +58,6 @@
#define SIRFSOC_SPI_TXFIFO_THD_INT_EN BIT(9) #define SIRFSOC_SPI_TXFIFO_THD_INT_EN BIT(9)
#define SIRFSOC_SPI_FRM_END_INT_EN BIT(10) #define SIRFSOC_SPI_FRM_END_INT_EN BIT(10)
#define SIRFSOC_SPI_INT_MASK_ALL 0x1FFF
/* Interrupt status */ /* Interrupt status */
#define SIRFSOC_SPI_RX_DONE BIT(0) #define SIRFSOC_SPI_RX_DONE BIT(0)
#define SIRFSOC_SPI_TX_DONE BIT(1) #define SIRFSOC_SPI_TX_DONE BIT(1)
@@ -110,20 +86,66 @@
#define SIRFSOC_SPI_FIFO_WIDTH_BYTE (0 << 0) #define SIRFSOC_SPI_FIFO_WIDTH_BYTE (0 << 0)
#define SIRFSOC_SPI_FIFO_WIDTH_WORD (1 << 0) #define SIRFSOC_SPI_FIFO_WIDTH_WORD (1 << 0)
#define SIRFSOC_SPI_FIFO_WIDTH_DWORD (2 << 0) #define SIRFSOC_SPI_FIFO_WIDTH_DWORD (2 << 0)
/* USP related */
/* FIFO Status */ #define SIRFSOC_USP_SYNC_MODE BIT(0)
#define SIRFSOC_SPI_FIFO_LEVEL_MASK 0xFF #define SIRFSOC_USP_SLV_MODE BIT(1)
#define SIRFSOC_SPI_FIFO_FULL BIT(8) #define SIRFSOC_USP_LSB BIT(4)
#define SIRFSOC_SPI_FIFO_EMPTY BIT(9) #define SIRFSOC_USP_EN BIT(5)
#define SIRFSOC_USP_RXD_FALLING_EDGE BIT(6)
/* 256 bytes rx/tx FIFO */ #define SIRFSOC_USP_TXD_FALLING_EDGE BIT(7)
#define SIRFSOC_SPI_FIFO_SIZE 256 #define SIRFSOC_USP_CS_HIGH_VALID BIT(9)
#define SIRFSOC_SPI_DAT_FRM_LEN_MAX (64 * 1024) #define SIRFSOC_USP_SCLK_IDLE_STAT BIT(11)
#define SIRFSOC_USP_TFS_IO_MODE BIT(14)
#define SIRFSOC_SPI_FIFO_SC(x) ((x) & 0x3F) #define SIRFSOC_USP_TFS_IO_INPUT BIT(19)
#define SIRFSOC_SPI_FIFO_LC(x) (((x) & 0x3F) << 10)
#define SIRFSOC_SPI_FIFO_HC(x) (((x) & 0x3F) << 20) #define SIRFSOC_USP_RXD_DELAY_LEN_MASK 0xFF
#define SIRFSOC_SPI_FIFO_THD(x) (((x) & 0xFF) << 2) #define SIRFSOC_USP_TXD_DELAY_LEN_MASK 0xFF
#define SIRFSOC_USP_RXD_DELAY_OFFSET 0
#define SIRFSOC_USP_TXD_DELAY_OFFSET 8
#define SIRFSOC_USP_RXD_DELAY_LEN 1
#define SIRFSOC_USP_TXD_DELAY_LEN 1
#define SIRFSOC_USP_CLK_DIVISOR_OFFSET 21
#define SIRFSOC_USP_CLK_DIVISOR_MASK 0x3FF
#define SIRFSOC_USP_CLK_10_11_MASK 0x3
#define SIRFSOC_USP_CLK_10_11_OFFSET 30
#define SIRFSOC_USP_CLK_12_15_MASK 0xF
#define SIRFSOC_USP_CLK_12_15_OFFSET 24
#define SIRFSOC_USP_TX_DATA_OFFSET 0
#define SIRFSOC_USP_TX_SYNC_OFFSET 8
#define SIRFSOC_USP_TX_FRAME_OFFSET 16
#define SIRFSOC_USP_TX_SHIFTER_OFFSET 24
#define SIRFSOC_USP_TX_DATA_MASK 0xFF
#define SIRFSOC_USP_TX_SYNC_MASK 0xFF
#define SIRFSOC_USP_TX_FRAME_MASK 0xFF
#define SIRFSOC_USP_TX_SHIFTER_MASK 0x1F
#define SIRFSOC_USP_RX_DATA_OFFSET 0
#define SIRFSOC_USP_RX_FRAME_OFFSET 8
#define SIRFSOC_USP_RX_SHIFTER_OFFSET 16
#define SIRFSOC_USP_RX_DATA_MASK 0xFF
#define SIRFSOC_USP_RX_FRAME_MASK 0xFF
#define SIRFSOC_USP_RX_SHIFTER_MASK 0x1F
#define SIRFSOC_USP_CS_HIGH_VALUE BIT(1)
#define SIRFSOC_SPI_FIFO_SC_OFFSET 0
#define SIRFSOC_SPI_FIFO_LC_OFFSET 10
#define SIRFSOC_SPI_FIFO_HC_OFFSET 20
#define SIRFSOC_SPI_FIFO_FULL_MASK(s) (1 << ((s)->fifo_full_offset))
#define SIRFSOC_SPI_FIFO_EMPTY_MASK(s) (1 << ((s)->fifo_full_offset + 1))
#define SIRFSOC_SPI_FIFO_THD_MASK(s) ((s)->fifo_size - 1)
#define SIRFSOC_SPI_FIFO_THD_OFFSET 2
#define SIRFSOC_SPI_FIFO_LEVEL_CHK_MASK(s, val) \
((val) & (s)->fifo_level_chk_mask)
enum sirf_spi_type {
SIRF_REAL_SPI,
SIRF_USP_SPI_P2,
SIRF_USP_SPI_A7,
};
/* /*
* only if the rx/tx buffer and transfer size are 4-bytes aligned, we use dma * only if the rx/tx buffer and transfer size are 4-bytes aligned, we use dma
@@ -137,6 +159,95 @@
#define SIRFSOC_MAX_CMD_BYTES 4 #define SIRFSOC_MAX_CMD_BYTES 4
#define SIRFSOC_SPI_DEFAULT_FRQ 1000000 #define SIRFSOC_SPI_DEFAULT_FRQ 1000000
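The truncated comment above refers to the driver's DMA-eligibility test (the IS_DMA_VALID() macro itself lies outside this hunk): DMA is only used when the rx/tx buffers and the transfer length are 4-byte aligned. A minimal sketch of such a check; the macro name and fields are illustrative, not the driver's verbatim definition:

/* Sketch only: DMA requires 4-byte-aligned buffers and length */
#define SIRF_SPI_DMA_ALIGNED(t) \
	(!((uintptr_t)(t)->tx_buf & 0x3) && \
	 !((uintptr_t)(t)->rx_buf & 0x3) && \
	 !((t)->len & 0x3))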
struct sirf_spi_register {
/*SPI and USP-SPI common*/
u32 tx_rx_en;
u32 int_en;
u32 int_st;
u32 tx_dma_io_ctrl;
u32 tx_dma_io_len;
u32 txfifo_ctrl;
u32 txfifo_level_chk;
u32 txfifo_op;
u32 txfifo_st;
u32 txfifo_data;
u32 rx_dma_io_ctrl;
u32 rx_dma_io_len;
u32 rxfifo_ctrl;
u32 rxfifo_level_chk;
u32 rxfifo_op;
u32 rxfifo_st;
u32 rxfifo_data;
/*SPI self*/
u32 spi_ctrl;
u32 spi_cmd;
u32 spi_dummy_delay_ctrl;
/*USP-SPI self*/
u32 usp_mode1;
u32 usp_mode2;
u32 usp_tx_frame_ctrl;
u32 usp_rx_frame_ctrl;
u32 usp_pin_io_data;
u32 usp_risc_dsp_mode;
u32 usp_async_param_reg;
u32 usp_irda_x_mode_div;
u32 usp_sm_cfg;
u32 usp_int_en_clr;
};
static const struct sirf_spi_register real_spi_register = {
.tx_rx_en = 0x8,
.int_en = 0xc,
.int_st = 0x10,
.tx_dma_io_ctrl = 0x100,
.tx_dma_io_len = 0x104,
.txfifo_ctrl = 0x108,
.txfifo_level_chk = 0x10c,
.txfifo_op = 0x110,
.txfifo_st = 0x114,
.txfifo_data = 0x118,
.rx_dma_io_ctrl = 0x120,
.rx_dma_io_len = 0x124,
.rxfifo_ctrl = 0x128,
.rxfifo_level_chk = 0x12c,
.rxfifo_op = 0x130,
.rxfifo_st = 0x134,
.rxfifo_data = 0x138,
.spi_ctrl = 0x0,
.spi_cmd = 0x4,
.spi_dummy_delay_ctrl = 0x144,
};
static const struct sirf_spi_register usp_spi_register = {
.tx_rx_en = 0x10,
.int_en = 0x14,
.int_st = 0x18,
.tx_dma_io_ctrl = 0x100,
.tx_dma_io_len = 0x104,
.txfifo_ctrl = 0x108,
.txfifo_level_chk = 0x10c,
.txfifo_op = 0x110,
.txfifo_st = 0x114,
.txfifo_data = 0x118,
.rx_dma_io_ctrl = 0x120,
.rx_dma_io_len = 0x124,
.rxfifo_ctrl = 0x128,
.rxfifo_level_chk = 0x12c,
.rxfifo_op = 0x130,
.rxfifo_st = 0x134,
.rxfifo_data = 0x138,
.usp_mode1 = 0x0,
.usp_mode2 = 0x4,
.usp_tx_frame_ctrl = 0x8,
.usp_rx_frame_ctrl = 0xc,
.usp_pin_io_data = 0x1c,
.usp_risc_dsp_mode = 0x20,
.usp_async_param_reg = 0x24,
.usp_irda_x_mode_div = 0x28,
.usp_sm_cfg = 0x2c,
.usp_int_en_clr = 0x140,
};
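Both register maps above are consumed through the same sspi->regs indirection, which is what lets one transfer path serve the real SPI block and both USP variants. The access pattern used throughout the rewritten functions below looks like this (variable names illustrative):

/* Sketch only: every register access goes through the per-variant map */
writel(data, sspi->base + sspi->regs->txfifo_data);
status = readl(sspi->base + sspi->regs->int_st);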
struct sirfsoc_spi { struct sirfsoc_spi {
struct spi_bitbang bitbang; struct spi_bitbang bitbang;
struct completion rx_done; struct completion rx_done;
@@ -164,7 +275,6 @@ struct sirfsoc_spi {
struct dma_chan *tx_chan; struct dma_chan *tx_chan;
dma_addr_t src_start; dma_addr_t src_start;
dma_addr_t dst_start; dma_addr_t dst_start;
void *dummypage;
int word_width; /* in bytes */ int word_width; /* in bytes */
/* /*
@@ -173,14 +283,39 @@
*/ */
bool tx_by_cmd; bool tx_by_cmd;
bool hw_cs; bool hw_cs;
enum sirf_spi_type type;
const struct sirf_spi_register *regs;
unsigned int fifo_size;
/* fifo empty offset is (fifo full offset + 1)*/
unsigned int fifo_full_offset;
/* fifo_level_chk_mask is (fifo_size/4 - 1) */
unsigned int fifo_level_chk_mask;
unsigned int dat_max_frm_len;
};
struct sirf_spi_comp_data {
const struct sirf_spi_register *regs;
enum sirf_spi_type type;
unsigned int dat_max_frm_len;
unsigned int fifo_size;
void (*hwinit)(struct sirfsoc_spi *sspi);
}; };
static void sirfsoc_usp_hwinit(struct sirfsoc_spi *sspi)
{
/* reset the USP block and re-enable it so that it can operate */
writel(readl(sspi->base + sspi->regs->usp_mode1) &
~SIRFSOC_USP_EN, sspi->base + sspi->regs->usp_mode1);
writel(readl(sspi->base + sspi->regs->usp_mode1) |
SIRFSOC_USP_EN, sspi->base + sspi->regs->usp_mode1);
}
static void spi_sirfsoc_rx_word_u8(struct sirfsoc_spi *sspi) static void spi_sirfsoc_rx_word_u8(struct sirfsoc_spi *sspi)
{ {
u32 data; u32 data;
u8 *rx = sspi->rx; u8 *rx = sspi->rx;
data = readl(sspi->base + SIRFSOC_SPI_RXFIFO_DATA); data = readl(sspi->base + sspi->regs->rxfifo_data);
if (rx) { if (rx) {
*rx++ = (u8) data; *rx++ = (u8) data;
@@ -199,8 +334,7 @@ static void spi_sirfsoc_tx_word_u8(struct sirfsoc_spi *sspi)
data = *tx++; data = *tx++;
sspi->tx = tx; sspi->tx = tx;
} }
writel(data, sspi->base + sspi->regs->txfifo_data);
writel(data, sspi->base + SIRFSOC_SPI_TXFIFO_DATA);
sspi->left_tx_word--; sspi->left_tx_word--;
} }
@@ -209,7 +343,7 @@ static void spi_sirfsoc_rx_word_u16(struct sirfsoc_spi *sspi)
u32 data; u32 data;
u16 *rx = sspi->rx; u16 *rx = sspi->rx;
data = readl(sspi->base + SIRFSOC_SPI_RXFIFO_DATA); data = readl(sspi->base + sspi->regs->rxfifo_data);
if (rx) { if (rx) {
*rx++ = (u16) data; *rx++ = (u16) data;
@@ -229,7 +363,7 @@ static void spi_sirfsoc_tx_word_u16(struct sirfsoc_spi *sspi)
sspi->tx = tx; sspi->tx = tx;
} }
writel(data, sspi->base + SIRFSOC_SPI_TXFIFO_DATA); writel(data, sspi->base + sspi->regs->txfifo_data);
sspi->left_tx_word--; sspi->left_tx_word--;
} }
@@ -238,7 +372,7 @@ static void spi_sirfsoc_rx_word_u32(struct sirfsoc_spi *sspi)
u32 data; u32 data;
u32 *rx = sspi->rx; u32 *rx = sspi->rx;
data = readl(sspi->base + SIRFSOC_SPI_RXFIFO_DATA); data = readl(sspi->base + sspi->regs->rxfifo_data);
if (rx) { if (rx) {
*rx++ = (u32) data; *rx++ = (u32) data;
@@ -259,41 +393,59 @@ static void spi_sirfsoc_tx_word_u32(struct sirfsoc_spi *sspi)
sspi->tx = tx; sspi->tx = tx;
} }
writel(data, sspi->base + SIRFSOC_SPI_TXFIFO_DATA); writel(data, sspi->base + sspi->regs->txfifo_data);
sspi->left_tx_word--; sspi->left_tx_word--;
} }
static irqreturn_t spi_sirfsoc_irq(int irq, void *dev_id) static irqreturn_t spi_sirfsoc_irq(int irq, void *dev_id)
{ {
struct sirfsoc_spi *sspi = dev_id; struct sirfsoc_spi *sspi = dev_id;
u32 spi_stat = readl(sspi->base + SIRFSOC_SPI_INT_STATUS); u32 spi_stat;
if (sspi->tx_by_cmd && (spi_stat & SIRFSOC_SPI_FRM_END)) {
spi_stat = readl(sspi->base + sspi->regs->int_st);
if (sspi->tx_by_cmd && sspi->type == SIRF_REAL_SPI
&& (spi_stat & SIRFSOC_SPI_FRM_END)) {
complete(&sspi->tx_done); complete(&sspi->tx_done);
writel(0x0, sspi->base + SIRFSOC_SPI_INT_EN); writel(0x0, sspi->base + sspi->regs->int_en);
writel(SIRFSOC_SPI_INT_MASK_ALL, writel(readl(sspi->base + sspi->regs->int_st),
sspi->base + SIRFSOC_SPI_INT_STATUS); sspi->base + sspi->regs->int_st);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
/* Error Conditions */ /* Error Conditions */
if (spi_stat & SIRFSOC_SPI_RX_OFLOW || if (spi_stat & SIRFSOC_SPI_RX_OFLOW ||
spi_stat & SIRFSOC_SPI_TX_UFLOW) { spi_stat & SIRFSOC_SPI_TX_UFLOW) {
complete(&sspi->tx_done); complete(&sspi->tx_done);
complete(&sspi->rx_done); complete(&sspi->rx_done);
writel(0x0, sspi->base + SIRFSOC_SPI_INT_EN); switch (sspi->type) {
writel(SIRFSOC_SPI_INT_MASK_ALL, case SIRF_REAL_SPI:
sspi->base + SIRFSOC_SPI_INT_STATUS); case SIRF_USP_SPI_P2:
writel(0x0, sspi->base + sspi->regs->int_en);
break;
case SIRF_USP_SPI_A7:
writel(~0UL, sspi->base + sspi->regs->usp_int_en_clr);
break;
}
writel(readl(sspi->base + sspi->regs->int_st),
sspi->base + sspi->regs->int_st);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
if (spi_stat & SIRFSOC_SPI_TXFIFO_EMPTY) if (spi_stat & SIRFSOC_SPI_TXFIFO_EMPTY)
complete(&sspi->tx_done); complete(&sspi->tx_done);
while (!(readl(sspi->base + SIRFSOC_SPI_INT_STATUS) & while (!(readl(sspi->base + sspi->regs->int_st) &
SIRFSOC_SPI_RX_IO_DMA)) SIRFSOC_SPI_RX_IO_DMA))
cpu_relax(); cpu_relax();
complete(&sspi->rx_done); complete(&sspi->rx_done);
writel(0x0, sspi->base + SIRFSOC_SPI_INT_EN); switch (sspi->type) {
writel(SIRFSOC_SPI_INT_MASK_ALL, case SIRF_REAL_SPI:
sspi->base + SIRFSOC_SPI_INT_STATUS); case SIRF_USP_SPI_P2:
writel(0x0, sspi->base + sspi->regs->int_en);
break;
case SIRF_USP_SPI_A7:
writel(~0UL, sspi->base + sspi->regs->usp_int_en_clr);
break;
}
writel(readl(sspi->base + sspi->regs->int_st),
sspi->base + sspi->regs->int_st);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
@@ -313,8 +465,8 @@ static void spi_sirfsoc_cmd_transfer(struct spi_device *spi,
u32 cmd; u32 cmd;
sspi = spi_master_get_devdata(spi->master); sspi = spi_master_get_devdata(spi->master);
writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + SIRFSOC_SPI_TXFIFO_OP); writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + sspi->regs->txfifo_op);
writel(SIRFSOC_SPI_FIFO_START, sspi->base + SIRFSOC_SPI_TXFIFO_OP); writel(SIRFSOC_SPI_FIFO_START, sspi->base + sspi->regs->txfifo_op);
memcpy(&cmd, sspi->tx, t->len); memcpy(&cmd, sspi->tx, t->len);
if (sspi->word_width == 1 && !(spi->mode & SPI_LSB_FIRST)) if (sspi->word_width == 1 && !(spi->mode & SPI_LSB_FIRST))
cmd = cpu_to_be32(cmd) >> cmd = cpu_to_be32(cmd) >>
@@ -322,11 +474,11 @@ static void spi_sirfsoc_cmd_transfer(struct spi_device *spi,
if (sspi->word_width == 2 && t->len == 4 && if (sspi->word_width == 2 && t->len == 4 &&
(!(spi->mode & SPI_LSB_FIRST))) (!(spi->mode & SPI_LSB_FIRST)))
cmd = ((cmd & 0xffff) << 16) | (cmd >> 16); cmd = ((cmd & 0xffff) << 16) | (cmd >> 16);
writel(cmd, sspi->base + SIRFSOC_SPI_CMD); writel(cmd, sspi->base + sspi->regs->spi_cmd);
writel(SIRFSOC_SPI_FRM_END_INT_EN, writel(SIRFSOC_SPI_FRM_END_INT_EN,
sspi->base + SIRFSOC_SPI_INT_EN); sspi->base + sspi->regs->int_en);
writel(SIRFSOC_SPI_CMD_TX_EN, writel(SIRFSOC_SPI_CMD_TX_EN,
sspi->base + SIRFSOC_SPI_TX_RX_EN); sspi->base + sspi->regs->tx_rx_en);
if (wait_for_completion_timeout(&sspi->tx_done, timeout) == 0) { if (wait_for_completion_timeout(&sspi->tx_done, timeout) == 0) {
dev_err(&spi->dev, "cmd transfer timeout\n"); dev_err(&spi->dev, "cmd transfer timeout\n");
return; return;
@@ -342,25 +494,56 @@ static void spi_sirfsoc_dma_transfer(struct spi_device *spi,
int timeout = t->len * 10; int timeout = t->len * 10;
sspi = spi_master_get_devdata(spi->master); sspi = spi_master_get_devdata(spi->master);
writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + SIRFSOC_SPI_RXFIFO_OP); writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + sspi->regs->rxfifo_op);
writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + SIRFSOC_SPI_TXFIFO_OP); writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + sspi->regs->txfifo_op);
writel(SIRFSOC_SPI_FIFO_START, sspi->base + SIRFSOC_SPI_RXFIFO_OP); switch (sspi->type) {
writel(SIRFSOC_SPI_FIFO_START, sspi->base + SIRFSOC_SPI_TXFIFO_OP); case SIRF_REAL_SPI:
writel(0, sspi->base + SIRFSOC_SPI_INT_EN); writel(SIRFSOC_SPI_FIFO_START,
writel(SIRFSOC_SPI_INT_MASK_ALL, sspi->base + SIRFSOC_SPI_INT_STATUS); sspi->base + sspi->regs->rxfifo_op);
if (sspi->left_tx_word < SIRFSOC_SPI_DAT_FRM_LEN_MAX) { writel(SIRFSOC_SPI_FIFO_START,
writel(readl(sspi->base + SIRFSOC_SPI_CTRL) | sspi->base + sspi->regs->txfifo_op);
SIRFSOC_SPI_ENA_AUTO_CLR | SIRFSOC_SPI_MUL_DAT_MODE, writel(0, sspi->base + sspi->regs->int_en);
sspi->base + SIRFSOC_SPI_CTRL); break;
writel(sspi->left_tx_word - 1, case SIRF_USP_SPI_P2:
sspi->base + SIRFSOC_SPI_TX_DMA_IO_LEN); writel(0x0, sspi->base + sspi->regs->rxfifo_op);
writel(sspi->left_tx_word - 1, writel(0x0, sspi->base + sspi->regs->txfifo_op);
sspi->base + SIRFSOC_SPI_RX_DMA_IO_LEN); writel(0, sspi->base + sspi->regs->int_en);
break;
case SIRF_USP_SPI_A7:
writel(0x0, sspi->base + sspi->regs->rxfifo_op);
writel(0x0, sspi->base + sspi->regs->txfifo_op);
writel(~0UL, sspi->base + sspi->regs->usp_int_en_clr);
break;
}
writel(readl(sspi->base + sspi->regs->int_st),
sspi->base + sspi->regs->int_st);
if (sspi->left_tx_word < sspi->dat_max_frm_len) {
switch (sspi->type) {
case SIRF_REAL_SPI:
writel(readl(sspi->base + sspi->regs->spi_ctrl) |
SIRFSOC_SPI_ENA_AUTO_CLR |
SIRFSOC_SPI_MUL_DAT_MODE,
sspi->base + sspi->regs->spi_ctrl);
writel(sspi->left_tx_word - 1,
sspi->base + sspi->regs->tx_dma_io_len);
writel(sspi->left_tx_word - 1,
sspi->base + sspi->regs->rx_dma_io_len);
break;
case SIRF_USP_SPI_P2:
case SIRF_USP_SPI_A7:
/* USP simulates SPI; tx/rx_dma_io_len is given in bytes */
writel(sspi->left_tx_word * sspi->word_width,
sspi->base + sspi->regs->tx_dma_io_len);
writel(sspi->left_tx_word * sspi->word_width,
sspi->base + sspi->regs->rx_dma_io_len);
break;
}
} else { } else {
writel(readl(sspi->base + SIRFSOC_SPI_CTRL), if (sspi->type == SIRF_REAL_SPI)
sspi->base + SIRFSOC_SPI_CTRL); writel(readl(sspi->base + sspi->regs->spi_ctrl),
writel(0, sspi->base + SIRFSOC_SPI_TX_DMA_IO_LEN); sspi->base + sspi->regs->spi_ctrl);
writel(0, sspi->base + SIRFSOC_SPI_RX_DMA_IO_LEN); writel(0, sspi->base + sspi->regs->tx_dma_io_len);
writel(0, sspi->base + sspi->regs->rx_dma_io_len);
} }
sspi->dst_start = dma_map_single(&spi->dev, sspi->rx, t->len, sspi->dst_start = dma_map_single(&spi->dev, sspi->rx, t->len,
(t->tx_buf != t->rx_buf) ? (t->tx_buf != t->rx_buf) ?
@@ -385,7 +568,14 @@ static void spi_sirfsoc_dma_transfer(struct spi_device *spi,
dma_async_issue_pending(sspi->tx_chan); dma_async_issue_pending(sspi->tx_chan);
dma_async_issue_pending(sspi->rx_chan); dma_async_issue_pending(sspi->rx_chan);
writel(SIRFSOC_SPI_RX_EN | SIRFSOC_SPI_TX_EN, writel(SIRFSOC_SPI_RX_EN | SIRFSOC_SPI_TX_EN,
sspi->base + SIRFSOC_SPI_TX_RX_EN); sspi->base + sspi->regs->tx_rx_en);
if (sspi->type == SIRF_USP_SPI_P2 ||
sspi->type == SIRF_USP_SPI_A7) {
writel(SIRFSOC_SPI_FIFO_START,
sspi->base + sspi->regs->rxfifo_op);
writel(SIRFSOC_SPI_FIFO_START,
sspi->base + sspi->regs->txfifo_op);
}
if (wait_for_completion_timeout(&sspi->rx_done, timeout) == 0) { if (wait_for_completion_timeout(&sspi->rx_done, timeout) == 0) {
dev_err(&spi->dev, "transfer timeout\n"); dev_err(&spi->dev, "transfer timeout\n");
dmaengine_terminate_all(sspi->rx_chan); dmaengine_terminate_all(sspi->rx_chan);
@@ -398,15 +588,21 @@ static void spi_sirfsoc_dma_transfer(struct spi_device *spi,
*/ */
if (wait_for_completion_timeout(&sspi->tx_done, timeout) == 0) { if (wait_for_completion_timeout(&sspi->tx_done, timeout) == 0) {
dev_err(&spi->dev, "transfer timeout\n"); dev_err(&spi->dev, "transfer timeout\n");
if (sspi->type == SIRF_USP_SPI_P2 ||
sspi->type == SIRF_USP_SPI_A7)
writel(0, sspi->base + sspi->regs->tx_rx_en);
dmaengine_terminate_all(sspi->tx_chan); dmaengine_terminate_all(sspi->tx_chan);
} }
dma_unmap_single(&spi->dev, sspi->src_start, t->len, DMA_TO_DEVICE); dma_unmap_single(&spi->dev, sspi->src_start, t->len, DMA_TO_DEVICE);
dma_unmap_single(&spi->dev, sspi->dst_start, t->len, DMA_FROM_DEVICE); dma_unmap_single(&spi->dev, sspi->dst_start, t->len, DMA_FROM_DEVICE);
/* TX, RX FIFO stop */ /* TX, RX FIFO stop */
writel(0, sspi->base + SIRFSOC_SPI_RXFIFO_OP); writel(0, sspi->base + sspi->regs->rxfifo_op);
writel(0, sspi->base + SIRFSOC_SPI_TXFIFO_OP); writel(0, sspi->base + sspi->regs->txfifo_op);
if (sspi->left_tx_word >= SIRFSOC_SPI_DAT_FRM_LEN_MAX) if (sspi->left_tx_word >= sspi->dat_max_frm_len)
writel(0, sspi->base + SIRFSOC_SPI_TX_RX_EN); writel(0, sspi->base + sspi->regs->tx_rx_en);
if (sspi->type == SIRF_USP_SPI_P2 ||
sspi->type == SIRF_USP_SPI_A7)
writel(0, sspi->base + sspi->regs->tx_rx_en);
} }
static void spi_sirfsoc_pio_transfer(struct spi_device *spi, static void spi_sirfsoc_pio_transfer(struct spi_device *spi,
@@ -414,57 +610,105 @@ static void spi_sirfsoc_pio_transfer(struct spi_device *spi,
{ {
struct sirfsoc_spi *sspi; struct sirfsoc_spi *sspi;
int timeout = t->len * 10; int timeout = t->len * 10;
unsigned int data_units;
sspi = spi_master_get_devdata(spi->master); sspi = spi_master_get_devdata(spi->master);
do { do {
writel(SIRFSOC_SPI_FIFO_RESET, writel(SIRFSOC_SPI_FIFO_RESET,
sspi->base + SIRFSOC_SPI_RXFIFO_OP); sspi->base + sspi->regs->rxfifo_op);
writel(SIRFSOC_SPI_FIFO_RESET, writel(SIRFSOC_SPI_FIFO_RESET,
sspi->base + SIRFSOC_SPI_TXFIFO_OP); sspi->base + sspi->regs->txfifo_op);
writel(SIRFSOC_SPI_FIFO_START, switch (sspi->type) {
sspi->base + SIRFSOC_SPI_RXFIFO_OP); case SIRF_USP_SPI_P2:
writel(SIRFSOC_SPI_FIFO_START, writel(0x0, sspi->base + sspi->regs->rxfifo_op);
sspi->base + SIRFSOC_SPI_TXFIFO_OP); writel(0x0, sspi->base + sspi->regs->txfifo_op);
writel(0, sspi->base + SIRFSOC_SPI_INT_EN); writel(0, sspi->base + sspi->regs->int_en);
writel(SIRFSOC_SPI_INT_MASK_ALL, writel(readl(sspi->base + sspi->regs->int_st),
sspi->base + SIRFSOC_SPI_INT_STATUS); sspi->base + sspi->regs->int_st);
writel(readl(sspi->base + SIRFSOC_SPI_CTRL) | writel(min((sspi->left_tx_word * sspi->word_width),
SIRFSOC_SPI_MUL_DAT_MODE | SIRFSOC_SPI_ENA_AUTO_CLR, sspi->fifo_size),
sspi->base + SIRFSOC_SPI_CTRL); sspi->base + sspi->regs->tx_dma_io_len);
writel(min(sspi->left_tx_word, (u32)(256 / sspi->word_width)) writel(min((sspi->left_rx_word * sspi->word_width),
- 1, sspi->base + SIRFSOC_SPI_TX_DMA_IO_LEN); sspi->fifo_size),
writel(min(sspi->left_rx_word, (u32)(256 / sspi->word_width)) sspi->base + sspi->regs->rx_dma_io_len);
- 1, sspi->base + SIRFSOC_SPI_RX_DMA_IO_LEN); break;
while (!((readl(sspi->base + SIRFSOC_SPI_TXFIFO_STATUS) case SIRF_USP_SPI_A7:
& SIRFSOC_SPI_FIFO_FULL)) && sspi->left_tx_word) writel(0x0, sspi->base + sspi->regs->rxfifo_op);
writel(0x0, sspi->base + sspi->regs->txfifo_op);
writel(~0UL, sspi->base + sspi->regs->usp_int_en_clr);
writel(readl(sspi->base + sspi->regs->int_st),
sspi->base + sspi->regs->int_st);
writel(min((sspi->left_tx_word * sspi->word_width),
sspi->fifo_size),
sspi->base + sspi->regs->tx_dma_io_len);
writel(min((sspi->left_rx_word * sspi->word_width),
sspi->fifo_size),
sspi->base + sspi->regs->rx_dma_io_len);
break;
case SIRF_REAL_SPI:
writel(SIRFSOC_SPI_FIFO_START,
sspi->base + sspi->regs->rxfifo_op);
writel(SIRFSOC_SPI_FIFO_START,
sspi->base + sspi->regs->txfifo_op);
writel(0, sspi->base + sspi->regs->int_en);
writel(readl(sspi->base + sspi->regs->int_st),
sspi->base + sspi->regs->int_st);
writel(readl(sspi->base + sspi->regs->spi_ctrl) |
SIRFSOC_SPI_MUL_DAT_MODE |
SIRFSOC_SPI_ENA_AUTO_CLR,
sspi->base + sspi->regs->spi_ctrl);
data_units = sspi->fifo_size / sspi->word_width;
writel(min(sspi->left_tx_word, data_units) - 1,
sspi->base + sspi->regs->tx_dma_io_len);
writel(min(sspi->left_rx_word, data_units) - 1,
sspi->base + sspi->regs->rx_dma_io_len);
break;
}
while (!((readl(sspi->base + sspi->regs->txfifo_st)
& SIRFSOC_SPI_FIFO_FULL_MASK(sspi))) &&
sspi->left_tx_word)
sspi->tx_word(sspi); sspi->tx_word(sspi);
writel(SIRFSOC_SPI_TXFIFO_EMPTY_INT_EN | writel(SIRFSOC_SPI_TXFIFO_EMPTY_INT_EN |
SIRFSOC_SPI_TX_UFLOW_INT_EN | SIRFSOC_SPI_TX_UFLOW_INT_EN |
SIRFSOC_SPI_RX_OFLOW_INT_EN | SIRFSOC_SPI_RX_OFLOW_INT_EN |
SIRFSOC_SPI_RX_IO_DMA_INT_EN, SIRFSOC_SPI_RX_IO_DMA_INT_EN,
sspi->base + SIRFSOC_SPI_INT_EN); sspi->base + sspi->regs->int_en);
writel(SIRFSOC_SPI_RX_EN | SIRFSOC_SPI_TX_EN, writel(SIRFSOC_SPI_RX_EN | SIRFSOC_SPI_TX_EN,
sspi->base + SIRFSOC_SPI_TX_RX_EN); sspi->base + sspi->regs->tx_rx_en);
if (sspi->type == SIRF_USP_SPI_P2 ||
sspi->type == SIRF_USP_SPI_A7) {
writel(SIRFSOC_SPI_FIFO_START,
sspi->base + sspi->regs->rxfifo_op);
writel(SIRFSOC_SPI_FIFO_START,
sspi->base + sspi->regs->txfifo_op);
}
if (!wait_for_completion_timeout(&sspi->tx_done, timeout) || if (!wait_for_completion_timeout(&sspi->tx_done, timeout) ||
!wait_for_completion_timeout(&sspi->rx_done, timeout)) { !wait_for_completion_timeout(&sspi->rx_done, timeout)) {
dev_err(&spi->dev, "transfer timeout\n"); dev_err(&spi->dev, "transfer timeout\n");
if (sspi->type == SIRF_USP_SPI_P2 ||
sspi->type == SIRF_USP_SPI_A7)
writel(0, sspi->base + sspi->regs->tx_rx_en);
break; break;
} }
while (!((readl(sspi->base + SIRFSOC_SPI_RXFIFO_STATUS) while (!((readl(sspi->base + sspi->regs->rxfifo_st)
& SIRFSOC_SPI_FIFO_EMPTY)) && sspi->left_rx_word) & SIRFSOC_SPI_FIFO_EMPTY_MASK(sspi))) &&
sspi->left_rx_word)
sspi->rx_word(sspi); sspi->rx_word(sspi);
writel(0, sspi->base + SIRFSOC_SPI_RXFIFO_OP); if (sspi->type == SIRF_USP_SPI_P2 ||
writel(0, sspi->base + SIRFSOC_SPI_TXFIFO_OP); sspi->type == SIRF_USP_SPI_A7)
writel(0, sspi->base + sspi->regs->tx_rx_en);
writel(0, sspi->base + sspi->regs->rxfifo_op);
writel(0, sspi->base + sspi->regs->txfifo_op);
} while (sspi->left_tx_word != 0 || sspi->left_rx_word != 0); } while (sspi->left_tx_word != 0 || sspi->left_rx_word != 0);
} }
static int spi_sirfsoc_transfer(struct spi_device *spi, struct spi_transfer *t) static int spi_sirfsoc_transfer(struct spi_device *spi, struct spi_transfer *t)
{ {
struct sirfsoc_spi *sspi; struct sirfsoc_spi *sspi;
sspi = spi_master_get_devdata(spi->master);
sspi->tx = t->tx_buf ? t->tx_buf : sspi->dummypage; sspi = spi_master_get_devdata(spi->master);
sspi->rx = t->rx_buf ? t->rx_buf : sspi->dummypage; sspi->tx = t->tx_buf;
sspi->rx = t->rx_buf;
sspi->left_tx_word = sspi->left_rx_word = t->len / sspi->word_width; sspi->left_tx_word = sspi->left_rx_word = t->len / sspi->word_width;
reinit_completion(&sspi->rx_done); reinit_completion(&sspi->rx_done);
reinit_completion(&sspi->tx_done); reinit_completion(&sspi->tx_done);
@@ -473,7 +717,7 @@ static int spi_sirfsoc_transfer(struct spi_device *spi, struct spi_transfer *t)
* null, just fill command data into command register and wait for its * null, just fill command data into command register and wait for its
* completion. * completion.
*/ */
if (sspi->tx_by_cmd) if (sspi->type == SIRF_REAL_SPI && sspi->tx_by_cmd)
spi_sirfsoc_cmd_transfer(spi, t); spi_sirfsoc_cmd_transfer(spi, t);
else if (IS_DMA_VALID(t)) else if (IS_DMA_VALID(t))
spi_sirfsoc_dma_transfer(spi, t); spi_sirfsoc_dma_transfer(spi, t);
@@ -488,22 +732,49 @@ static void spi_sirfsoc_chipselect(struct spi_device *spi, int value)
struct sirfsoc_spi *sspi = spi_master_get_devdata(spi->master); struct sirfsoc_spi *sspi = spi_master_get_devdata(spi->master);
if (sspi->hw_cs) { if (sspi->hw_cs) {
u32 regval = readl(sspi->base + SIRFSOC_SPI_CTRL); u32 regval;
switch (value) {
case BITBANG_CS_ACTIVE: switch (sspi->type) {
if (spi->mode & SPI_CS_HIGH) case SIRF_REAL_SPI:
regval |= SIRFSOC_SPI_CS_IO_OUT; regval = readl(sspi->base + sspi->regs->spi_ctrl);
else switch (value) {
regval &= ~SIRFSOC_SPI_CS_IO_OUT; case BITBANG_CS_ACTIVE:
if (spi->mode & SPI_CS_HIGH)
regval |= SIRFSOC_SPI_CS_IO_OUT;
else
regval &= ~SIRFSOC_SPI_CS_IO_OUT;
break;
case BITBANG_CS_INACTIVE:
if (spi->mode & SPI_CS_HIGH)
regval &= ~SIRFSOC_SPI_CS_IO_OUT;
else
regval |= SIRFSOC_SPI_CS_IO_OUT;
break;
}
writel(regval, sspi->base + sspi->regs->spi_ctrl);
break; break;
case BITBANG_CS_INACTIVE: case SIRF_USP_SPI_P2:
if (spi->mode & SPI_CS_HIGH) case SIRF_USP_SPI_A7:
regval &= ~SIRFSOC_SPI_CS_IO_OUT; regval = readl(sspi->base +
else sspi->regs->usp_pin_io_data);
regval |= SIRFSOC_SPI_CS_IO_OUT; switch (value) {
case BITBANG_CS_ACTIVE:
if (spi->mode & SPI_CS_HIGH)
regval |= SIRFSOC_USP_CS_HIGH_VALUE;
else
regval &= ~(SIRFSOC_USP_CS_HIGH_VALUE);
break;
case BITBANG_CS_INACTIVE:
if (spi->mode & SPI_CS_HIGH)
regval &= ~(SIRFSOC_USP_CS_HIGH_VALUE);
else
regval |= SIRFSOC_USP_CS_HIGH_VALUE;
break;
}
writel(regval,
sspi->base + sspi->regs->usp_pin_io_data);
break; break;
} }
writel(regval, sspi->base + SIRFSOC_SPI_CTRL);
} else { } else {
switch (value) { switch (value) {
case BITBANG_CS_ACTIVE: case BITBANG_CS_ACTIVE:
@@ -518,27 +789,102 @@ static void spi_sirfsoc_chipselect(struct spi_device *spi, int value)
} }
} }
static int spi_sirfsoc_config_mode(struct spi_device *spi)
{
struct sirfsoc_spi *sspi;
u32 regval, usp_mode1;
sspi = spi_master_get_devdata(spi->master);
regval = readl(sspi->base + sspi->regs->spi_ctrl);
usp_mode1 = readl(sspi->base + sspi->regs->usp_mode1);
if (!(spi->mode & SPI_CS_HIGH)) {
regval |= SIRFSOC_SPI_CS_IDLE_STAT;
usp_mode1 &= ~SIRFSOC_USP_CS_HIGH_VALID;
} else {
regval &= ~SIRFSOC_SPI_CS_IDLE_STAT;
usp_mode1 |= SIRFSOC_USP_CS_HIGH_VALID;
}
if (!(spi->mode & SPI_LSB_FIRST)) {
regval |= SIRFSOC_SPI_TRAN_MSB;
usp_mode1 &= ~SIRFSOC_USP_LSB;
} else {
regval &= ~SIRFSOC_SPI_TRAN_MSB;
usp_mode1 |= SIRFSOC_USP_LSB;
}
if (spi->mode & SPI_CPOL) {
regval |= SIRFSOC_SPI_CLK_IDLE_STAT;
usp_mode1 |= SIRFSOC_USP_SCLK_IDLE_STAT;
} else {
regval &= ~SIRFSOC_SPI_CLK_IDLE_STAT;
usp_mode1 &= ~SIRFSOC_USP_SCLK_IDLE_STAT;
}
/*
* Data should be driven at least 1/2 cycle before the fetch edge
* to make sure that data gets stable at the fetch edge.
*/
if (((spi->mode & SPI_CPOL) && (spi->mode & SPI_CPHA)) ||
(!(spi->mode & SPI_CPOL) && !(spi->mode & SPI_CPHA))) {
regval &= ~SIRFSOC_SPI_DRV_POS_EDGE;
usp_mode1 |= (SIRFSOC_USP_TXD_FALLING_EDGE |
SIRFSOC_USP_RXD_FALLING_EDGE);
} else {
regval |= SIRFSOC_SPI_DRV_POS_EDGE;
usp_mode1 &= ~(SIRFSOC_USP_RXD_FALLING_EDGE |
SIRFSOC_USP_TXD_FALLING_EDGE);
}
writel((SIRFSOC_SPI_FIFO_LEVEL_CHK_MASK(sspi, sspi->fifo_size - 2) <<
SIRFSOC_SPI_FIFO_SC_OFFSET) |
(SIRFSOC_SPI_FIFO_LEVEL_CHK_MASK(sspi, sspi->fifo_size / 2) <<
SIRFSOC_SPI_FIFO_LC_OFFSET) |
(SIRFSOC_SPI_FIFO_LEVEL_CHK_MASK(sspi, 2) <<
SIRFSOC_SPI_FIFO_HC_OFFSET),
sspi->base + sspi->regs->txfifo_level_chk);
writel((SIRFSOC_SPI_FIFO_LEVEL_CHK_MASK(sspi, 2) <<
SIRFSOC_SPI_FIFO_SC_OFFSET) |
(SIRFSOC_SPI_FIFO_LEVEL_CHK_MASK(sspi, sspi->fifo_size / 2) <<
SIRFSOC_SPI_FIFO_LC_OFFSET) |
(SIRFSOC_SPI_FIFO_LEVEL_CHK_MASK(sspi, sspi->fifo_size - 2) <<
SIRFSOC_SPI_FIFO_HC_OFFSET),
sspi->base + sspi->regs->rxfifo_level_chk);
/*
 * It should never be set to hardware CS mode because, in that mode,
 * the CS signal cannot be controlled by the driver.
 */
switch (sspi->type) {
case SIRF_REAL_SPI:
regval |= SIRFSOC_SPI_CS_IO_MODE;
writel(regval, sspi->base + sspi->regs->spi_ctrl);
break;
case SIRF_USP_SPI_P2:
case SIRF_USP_SPI_A7:
usp_mode1 |= SIRFSOC_USP_SYNC_MODE;
usp_mode1 |= SIRFSOC_USP_TFS_IO_MODE;
usp_mode1 &= ~SIRFSOC_USP_TFS_IO_INPUT;
writel(usp_mode1, sspi->base + sspi->regs->usp_mode1);
break;
}
return 0;
}
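The CPOL/CPHA handling in spi_sirfsoc_config_mode() reduces to driving TXD on the edge opposite to the sampling edge: SPI modes 0 and 3 drive on the falling edge, modes 1 and 2 on the rising edge. A condensed restatement of the if/else pair above (sketch only):

/* Sketch only: equivalent of the CPOL/CPHA test in config_mode */
bool cpol = spi->mode & SPI_CPOL;
bool cpha = spi->mode & SPI_CPHA;
bool drive_on_falling_edge = (cpol == cpha);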
static int static int
spi_sirfsoc_setup_transfer(struct spi_device *spi, struct spi_transfer *t) spi_sirfsoc_setup_transfer(struct spi_device *spi, struct spi_transfer *t)
{ {
struct sirfsoc_spi *sspi; struct sirfsoc_spi *sspi;
u8 bits_per_word = 0; u8 bits_per_word = 0;
int hz = 0; int hz = 0;
u32 regval; u32 regval, txfifo_ctrl, rxfifo_ctrl, tx_frm_ctl, rx_frm_ctl, usp_mode2;
u32 txfifo_ctrl, rxfifo_ctrl;
u32 fifo_size = SIRFSOC_SPI_FIFO_SIZE / 4;
sspi = spi_master_get_devdata(spi->master); sspi = spi_master_get_devdata(spi->master);
bits_per_word = (t) ? t->bits_per_word : spi->bits_per_word; bits_per_word = (t) ? t->bits_per_word : spi->bits_per_word;
hz = t && t->speed_hz ? t->speed_hz : spi->max_speed_hz; hz = t && t->speed_hz ? t->speed_hz : spi->max_speed_hz;
regval = (sspi->ctrl_freq / (2 * hz)) - 1; usp_mode2 = regval = (sspi->ctrl_freq / (2 * hz)) - 1;
if (regval > 0xFFFF || regval < 0) { if (regval > 0xFFFF || regval < 0) {
dev_err(&spi->dev, "Speed %d not supported\n", hz); dev_err(&spi->dev, "Speed %d not supported\n", hz);
return -EINVAL; return -EINVAL;
} }
switch (bits_per_word) { switch (bits_per_word) {
case 8: case 8:
regval |= SIRFSOC_SPI_TRAN_DAT_FORMAT_8; regval |= SIRFSOC_SPI_TRAN_DAT_FORMAT_8;
@@ -559,94 +905,177 @@ spi_sirfsoc_setup_transfer(struct spi_device *spi, struct spi_transfer *t)
sspi->tx_word = spi_sirfsoc_tx_word_u32; sspi->tx_word = spi_sirfsoc_tx_word_u32;
break; break;
default: default:
BUG(); dev_err(&spi->dev, "bpw %d not supported\n", bits_per_word);
return -EINVAL;
} }
sspi->word_width = DIV_ROUND_UP(bits_per_word, 8); sspi->word_width = DIV_ROUND_UP(bits_per_word, 8);
txfifo_ctrl = SIRFSOC_SPI_FIFO_THD(SIRFSOC_SPI_FIFO_SIZE / 2) | txfifo_ctrl = (((sspi->fifo_size / 2) &
(sspi->word_width >> 1); SIRFSOC_SPI_FIFO_THD_MASK(sspi))
rxfifo_ctrl = SIRFSOC_SPI_FIFO_THD(SIRFSOC_SPI_FIFO_SIZE / 2) | << SIRFSOC_SPI_FIFO_THD_OFFSET) |
(sspi->word_width >> 1); (sspi->word_width >> 1);
rxfifo_ctrl = (((sspi->fifo_size / 2) &
if (!(spi->mode & SPI_CS_HIGH)) SIRFSOC_SPI_FIFO_THD_MASK(sspi))
regval |= SIRFSOC_SPI_CS_IDLE_STAT; << SIRFSOC_SPI_FIFO_THD_OFFSET) |
if (!(spi->mode & SPI_LSB_FIRST)) (sspi->word_width >> 1);
regval |= SIRFSOC_SPI_TRAN_MSB; writel(txfifo_ctrl, sspi->base + sspi->regs->txfifo_ctrl);
if (spi->mode & SPI_CPOL) writel(rxfifo_ctrl, sspi->base + sspi->regs->rxfifo_ctrl);
regval |= SIRFSOC_SPI_CLK_IDLE_STAT; if (sspi->type == SIRF_USP_SPI_P2 ||
sspi->type == SIRF_USP_SPI_A7) {
/* tx_frm_ctl = 0;
* Data should be driven at least 1/2 cycle before the fetch edge tx_frm_ctl |= ((bits_per_word - 1) & SIRFSOC_USP_TX_DATA_MASK)
* to make sure that data gets stable at the fetch edge. << SIRFSOC_USP_TX_DATA_OFFSET;
*/ tx_frm_ctl |= ((bits_per_word + 1 + SIRFSOC_USP_TXD_DELAY_LEN
if (((spi->mode & SPI_CPOL) && (spi->mode & SPI_CPHA)) || - 1) & SIRFSOC_USP_TX_SYNC_MASK) <<
(!(spi->mode & SPI_CPOL) && !(spi->mode & SPI_CPHA))) SIRFSOC_USP_TX_SYNC_OFFSET;
regval &= ~SIRFSOC_SPI_DRV_POS_EDGE; tx_frm_ctl |= ((bits_per_word + 1 + SIRFSOC_USP_TXD_DELAY_LEN
else + 2 - 1) & SIRFSOC_USP_TX_FRAME_MASK) <<
regval |= SIRFSOC_SPI_DRV_POS_EDGE; SIRFSOC_USP_TX_FRAME_OFFSET;
tx_frm_ctl |= ((bits_per_word - 1) &
writel(SIRFSOC_SPI_FIFO_SC(fifo_size - 2) | SIRFSOC_USP_TX_SHIFTER_MASK) <<
SIRFSOC_SPI_FIFO_LC(fifo_size / 2) | SIRFSOC_USP_TX_SHIFTER_OFFSET;
SIRFSOC_SPI_FIFO_HC(2), rx_frm_ctl = 0;
sspi->base + SIRFSOC_SPI_TXFIFO_LEVEL_CHK); rx_frm_ctl |= ((bits_per_word - 1) & SIRFSOC_USP_RX_DATA_MASK)
writel(SIRFSOC_SPI_FIFO_SC(2) | << SIRFSOC_USP_RX_DATA_OFFSET;
SIRFSOC_SPI_FIFO_LC(fifo_size / 2) | rx_frm_ctl |= ((bits_per_word + 1 + SIRFSOC_USP_RXD_DELAY_LEN
SIRFSOC_SPI_FIFO_HC(fifo_size - 2), + 2 - 1) & SIRFSOC_USP_RX_FRAME_MASK) <<
sspi->base + SIRFSOC_SPI_RXFIFO_LEVEL_CHK); SIRFSOC_USP_RX_FRAME_OFFSET;
writel(txfifo_ctrl, sspi->base + SIRFSOC_SPI_TXFIFO_CTRL); rx_frm_ctl |= ((bits_per_word - 1)
writel(rxfifo_ctrl, sspi->base + SIRFSOC_SPI_RXFIFO_CTRL); & SIRFSOC_USP_RX_SHIFTER_MASK) <<
SIRFSOC_USP_RX_SHIFTER_OFFSET;
if (t && t->tx_buf && !t->rx_buf && (t->len <= SIRFSOC_MAX_CMD_BYTES)) { writel(tx_frm_ctl | (((usp_mode2 >> 10) &
regval |= (SIRFSOC_SPI_CMD_BYTE_NUM((t->len - 1)) | SIRFSOC_USP_CLK_10_11_MASK) <<
SIRFSOC_SPI_CMD_MODE); SIRFSOC_USP_CLK_10_11_OFFSET),
sspi->tx_by_cmd = true; sspi->base + sspi->regs->usp_tx_frame_ctrl);
} else { writel(rx_frm_ctl | (((usp_mode2 >> 12) &
regval &= ~SIRFSOC_SPI_CMD_MODE; SIRFSOC_USP_CLK_12_15_MASK) <<
sspi->tx_by_cmd = false; SIRFSOC_USP_CLK_12_15_OFFSET),
sspi->base + sspi->regs->usp_rx_frame_ctrl);
writel(readl(sspi->base + sspi->regs->usp_mode2) |
((usp_mode2 & SIRFSOC_USP_CLK_DIVISOR_MASK) <<
SIRFSOC_USP_CLK_DIVISOR_OFFSET) |
(SIRFSOC_USP_RXD_DELAY_LEN <<
SIRFSOC_USP_RXD_DELAY_OFFSET) |
(SIRFSOC_USP_TXD_DELAY_LEN <<
SIRFSOC_USP_TXD_DELAY_OFFSET),
sspi->base + sspi->regs->usp_mode2);
}
if (sspi->type == SIRF_REAL_SPI)
writel(regval, sspi->base + sspi->regs->spi_ctrl);
spi_sirfsoc_config_mode(spi);
if (sspi->type == SIRF_REAL_SPI) {
if (t && t->tx_buf && !t->rx_buf &&
(t->len <= SIRFSOC_MAX_CMD_BYTES)) {
sspi->tx_by_cmd = true;
writel(readl(sspi->base + sspi->regs->spi_ctrl) |
(SIRFSOC_SPI_CMD_BYTE_NUM((t->len - 1)) |
SIRFSOC_SPI_CMD_MODE),
sspi->base + sspi->regs->spi_ctrl);
} else {
sspi->tx_by_cmd = false;
writel(readl(sspi->base + sspi->regs->spi_ctrl) &
~SIRFSOC_SPI_CMD_MODE,
sspi->base + sspi->regs->spi_ctrl);
}
} }
/*
* it should never set to hardware cs mode because in hardware cs mode,
* cs signal can't controlled by driver.
*/
regval |= SIRFSOC_SPI_CS_IO_MODE;
writel(regval, sspi->base + SIRFSOC_SPI_CTRL);
if (IS_DMA_VALID(t)) { if (IS_DMA_VALID(t)) {
/* Enable DMA mode for RX, TX */ /* Enable DMA mode for RX, TX */
writel(0, sspi->base + SIRFSOC_SPI_TX_DMA_IO_CTRL); writel(0, sspi->base + sspi->regs->tx_dma_io_ctrl);
writel(SIRFSOC_SPI_RX_DMA_FLUSH, writel(SIRFSOC_SPI_RX_DMA_FLUSH,
sspi->base + SIRFSOC_SPI_RX_DMA_IO_CTRL); sspi->base + sspi->regs->rx_dma_io_ctrl);
} else { } else {
/* Enable IO mode for RX, TX */ /* Enable IO mode for RX, TX */
writel(SIRFSOC_SPI_IO_MODE_SEL, writel(SIRFSOC_SPI_IO_MODE_SEL,
sspi->base + SIRFSOC_SPI_TX_DMA_IO_CTRL); sspi->base + sspi->regs->tx_dma_io_ctrl);
writel(SIRFSOC_SPI_IO_MODE_SEL, writel(SIRFSOC_SPI_IO_MODE_SEL,
sspi->base + SIRFSOC_SPI_RX_DMA_IO_CTRL); sspi->base + sspi->regs->rx_dma_io_ctrl);
} }
return 0; return 0;
} }
static int spi_sirfsoc_setup(struct spi_device *spi) static int spi_sirfsoc_setup(struct spi_device *spi)
{ {
struct sirfsoc_spi *sspi; struct sirfsoc_spi *sspi;
int ret = 0;
sspi = spi_master_get_devdata(spi->master); sspi = spi_master_get_devdata(spi->master);
if (spi->cs_gpio == -ENOENT) if (spi->cs_gpio == -ENOENT)
sspi->hw_cs = true; sspi->hw_cs = true;
else else {
sspi->hw_cs = false; sspi->hw_cs = false;
return spi_sirfsoc_setup_transfer(spi, NULL); if (!spi_get_ctldata(spi)) {
void *cs = kmalloc(sizeof(int), GFP_KERNEL);
if (!cs) {
ret = -ENOMEM;
goto exit;
}
ret = gpio_is_valid(spi->cs_gpio);
if (!ret) {
dev_err(&spi->dev, "no valid gpio\n");
ret = -ENOENT;
goto exit;
}
ret = gpio_request(spi->cs_gpio, DRIVER_NAME);
if (ret) {
dev_err(&spi->dev, "failed to request gpio\n");
goto exit;
}
spi_set_ctldata(spi, cs);
}
}
spi_sirfsoc_config_mode(spi);
spi_sirfsoc_chipselect(spi, BITBANG_CS_INACTIVE);
exit:
return ret;
}
static void spi_sirfsoc_cleanup(struct spi_device *spi)
{
if (spi_get_ctldata(spi)) {
gpio_free(spi->cs_gpio);
kfree(spi_get_ctldata(spi));
}
} }
static const struct sirf_spi_comp_data sirf_real_spi = {
.regs = &real_spi_register,
.type = SIRF_REAL_SPI,
.dat_max_frm_len = 64 * 1024,
.fifo_size = 256,
};
static const struct sirf_spi_comp_data sirf_usp_spi_p2 = {
.regs = &usp_spi_register,
.type = SIRF_USP_SPI_P2,
.dat_max_frm_len = 1024 * 1024,
.fifo_size = 128,
.hwinit = sirfsoc_usp_hwinit,
};
static const struct sirf_spi_comp_data sirf_usp_spi_a7 = {
.regs = &usp_spi_register,
.type = SIRF_USP_SPI_A7,
.dat_max_frm_len = 1024 * 1024,
.fifo_size = 512,
.hwinit = sirfsoc_usp_hwinit,
};
static const struct of_device_id spi_sirfsoc_of_match[] = {
{ .compatible = "sirf,prima2-spi", .data = &sirf_real_spi},
{ .compatible = "sirf,prima2-usp-spi", .data = &sirf_usp_spi_p2},
{ .compatible = "sirf,atlas7-usp-spi", .data = &sirf_usp_spi_a7},
{}
};
MODULE_DEVICE_TABLE(of, spi_sirfsoc_of_match);
static int spi_sirfsoc_probe(struct platform_device *pdev) static int spi_sirfsoc_probe(struct platform_device *pdev)
{ {
struct sirfsoc_spi *sspi; struct sirfsoc_spi *sspi;
struct spi_master *master; struct spi_master *master;
struct resource *mem_res; struct resource *mem_res;
struct sirf_spi_comp_data *spi_comp_data;
int irq; int irq;
int i, ret; int ret;
const struct of_device_id *match;
ret = device_reset(&pdev->dev); ret = device_reset(&pdev->dev);
if (ret) { if (ret) {
@@ -659,16 +1088,22 @@ static int spi_sirfsoc_probe(struct platform_device *pdev)
dev_err(&pdev->dev, "Unable to allocate SPI master\n"); dev_err(&pdev->dev, "Unable to allocate SPI master\n");
return -ENOMEM; return -ENOMEM;
} }
match = of_match_node(spi_sirfsoc_of_match, pdev->dev.of_node);
platform_set_drvdata(pdev, master); platform_set_drvdata(pdev, master);
sspi = spi_master_get_devdata(master); sspi = spi_master_get_devdata(master);
sspi->fifo_full_offset = ilog2(sspi->fifo_size);
spi_comp_data = (struct sirf_spi_comp_data *)match->data;
sspi->regs = spi_comp_data->regs;
sspi->type = spi_comp_data->type;
sspi->fifo_level_chk_mask = (sspi->fifo_size / 4) - 1;
sspi->dat_max_frm_len = spi_comp_data->dat_max_frm_len;
sspi->fifo_size = spi_comp_data->fifo_size;
mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0); mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
sspi->base = devm_ioremap_resource(&pdev->dev, mem_res); sspi->base = devm_ioremap_resource(&pdev->dev, mem_res);
if (IS_ERR(sspi->base)) { if (IS_ERR(sspi->base)) {
ret = PTR_ERR(sspi->base); ret = PTR_ERR(sspi->base);
goto free_master; goto free_master;
} }
irq = platform_get_irq(pdev, 0); irq = platform_get_irq(pdev, 0);
if (irq < 0) { if (irq < 0) {
ret = -ENXIO; ret = -ENXIO;
@@ -684,11 +1119,13 @@ static int spi_sirfsoc_probe(struct platform_device *pdev)
sspi->bitbang.setup_transfer = spi_sirfsoc_setup_transfer; sspi->bitbang.setup_transfer = spi_sirfsoc_setup_transfer;
sspi->bitbang.txrx_bufs = spi_sirfsoc_transfer; sspi->bitbang.txrx_bufs = spi_sirfsoc_transfer;
sspi->bitbang.master->setup = spi_sirfsoc_setup; sspi->bitbang.master->setup = spi_sirfsoc_setup;
sspi->bitbang.master->cleanup = spi_sirfsoc_cleanup;
master->bus_num = pdev->id; master->bus_num = pdev->id;
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST | SPI_CS_HIGH; master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST | SPI_CS_HIGH;
master->bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(12) | master->bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(12) |
SPI_BPW_MASK(16) | SPI_BPW_MASK(32); SPI_BPW_MASK(16) | SPI_BPW_MASK(32);
master->max_speed_hz = SIRFSOC_SPI_DEFAULT_FRQ; master->max_speed_hz = SIRFSOC_SPI_DEFAULT_FRQ;
master->flags = SPI_MASTER_MUST_RX | SPI_MASTER_MUST_TX;
sspi->bitbang.master->dev.of_node = pdev->dev.of_node; sspi->bitbang.master->dev.of_node = pdev->dev.of_node;
/* request DMA channels */ /* request DMA channels */
@@ -711,47 +1148,19 @@ static int spi_sirfsoc_probe(struct platform_device *pdev)
goto free_tx_dma; goto free_tx_dma;
} }
clk_prepare_enable(sspi->clk); clk_prepare_enable(sspi->clk);
if (spi_comp_data->hwinit)
spi_comp_data->hwinit(sspi);
sspi->ctrl_freq = clk_get_rate(sspi->clk); sspi->ctrl_freq = clk_get_rate(sspi->clk);
init_completion(&sspi->rx_done); init_completion(&sspi->rx_done);
init_completion(&sspi->tx_done); init_completion(&sspi->tx_done);
writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + SIRFSOC_SPI_RXFIFO_OP);
writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + SIRFSOC_SPI_TXFIFO_OP);
writel(SIRFSOC_SPI_FIFO_START, sspi->base + SIRFSOC_SPI_RXFIFO_OP);
writel(SIRFSOC_SPI_FIFO_START, sspi->base + SIRFSOC_SPI_TXFIFO_OP);
/* We are not using dummy delay between command and data */
writel(0, sspi->base + SIRFSOC_SPI_DUMMY_DELAY_CTL);
sspi->dummypage = kmalloc(2 * PAGE_SIZE, GFP_KERNEL);
if (!sspi->dummypage) {
ret = -ENOMEM;
goto free_clk;
}
ret = spi_bitbang_start(&sspi->bitbang); ret = spi_bitbang_start(&sspi->bitbang);
if (ret) if (ret)
goto free_dummypage; goto free_clk;
for (i = 0; master->cs_gpios && i < master->num_chipselect; i++) {
if (master->cs_gpios[i] == -ENOENT)
continue;
if (!gpio_is_valid(master->cs_gpios[i])) {
dev_err(&pdev->dev, "no valid gpio\n");
ret = -EINVAL;
goto free_dummypage;
}
ret = devm_gpio_request(&pdev->dev,
master->cs_gpios[i], DRIVER_NAME);
if (ret) {
dev_err(&pdev->dev, "failed to request gpio\n");
goto free_dummypage;
}
}
dev_info(&pdev->dev, "registerred, bus number = %d\n", master->bus_num); dev_info(&pdev->dev, "registerred, bus number = %d\n", master->bus_num);
return 0; return 0;
free_dummypage:
kfree(sspi->dummypage);
free_clk: free_clk:
clk_disable_unprepare(sspi->clk); clk_disable_unprepare(sspi->clk);
clk_put(sspi->clk); clk_put(sspi->clk);
@@ -772,9 +1181,7 @@ static int spi_sirfsoc_remove(struct platform_device *pdev)
master = platform_get_drvdata(pdev); master = platform_get_drvdata(pdev);
sspi = spi_master_get_devdata(master); sspi = spi_master_get_devdata(master);
spi_bitbang_stop(&sspi->bitbang); spi_bitbang_stop(&sspi->bitbang);
kfree(sspi->dummypage);
clk_disable_unprepare(sspi->clk); clk_disable_unprepare(sspi->clk);
clk_put(sspi->clk); clk_put(sspi->clk);
dma_release_channel(sspi->rx_chan); dma_release_channel(sspi->rx_chan);
@@ -804,24 +1211,17 @@ static int spi_sirfsoc_resume(struct device *dev)
struct sirfsoc_spi *sspi = spi_master_get_devdata(master); struct sirfsoc_spi *sspi = spi_master_get_devdata(master);
clk_enable(sspi->clk); clk_enable(sspi->clk);
writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + SIRFSOC_SPI_RXFIFO_OP); writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + sspi->regs->txfifo_op);
writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + SIRFSOC_SPI_TXFIFO_OP); writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + sspi->regs->rxfifo_op);
writel(SIRFSOC_SPI_FIFO_START, sspi->base + SIRFSOC_SPI_RXFIFO_OP); writel(SIRFSOC_SPI_FIFO_START, sspi->base + sspi->regs->txfifo_op);
writel(SIRFSOC_SPI_FIFO_START, sspi->base + SIRFSOC_SPI_TXFIFO_OP); writel(SIRFSOC_SPI_FIFO_START, sspi->base + sspi->regs->rxfifo_op);
return 0;
return spi_master_resume(master);
} }
#endif #endif
static SIMPLE_DEV_PM_OPS(spi_sirfsoc_pm_ops, spi_sirfsoc_suspend, static SIMPLE_DEV_PM_OPS(spi_sirfsoc_pm_ops, spi_sirfsoc_suspend,
spi_sirfsoc_resume); spi_sirfsoc_resume);
static const struct of_device_id spi_sirfsoc_of_match[] = {
{ .compatible = "sirf,prima2-spi", },
{}
};
MODULE_DEVICE_TABLE(of, spi_sirfsoc_of_match);
static struct platform_driver spi_sirfsoc_driver = { static struct platform_driver spi_sirfsoc_driver = {
.driver = { .driver = {
.name = DRIVER_NAME, .name = DRIVER_NAME,
@@ -835,4 +1235,5 @@ module_platform_driver(spi_sirfsoc_driver);
MODULE_DESCRIPTION("SiRF SoC SPI master driver"); MODULE_DESCRIPTION("SiRF SoC SPI master driver");
MODULE_AUTHOR("Zhiwu Song <Zhiwu.Song@csr.com>"); MODULE_AUTHOR("Zhiwu Song <Zhiwu.Song@csr.com>");
MODULE_AUTHOR("Barry Song <Baohua.Song@csr.com>"); MODULE_AUTHOR("Barry Song <Baohua.Song@csr.com>");
MODULE_AUTHOR("Qipan Li <Qipan.Li@csr.com>");
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");
/*
* Xilinx Zynq UltraScale+ MPSoC Quad-SPI (QSPI) controller driver
* (master mode only)
*
* Copyright (C) 2009 - 2015 Xilinx, Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/spi/spi.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
/* Generic QSPI register offsets */
#define GQSPI_CONFIG_OFST 0x00000100
#define GQSPI_ISR_OFST 0x00000104
#define GQSPI_IDR_OFST 0x0000010C
#define GQSPI_IER_OFST 0x00000108
#define GQSPI_IMASK_OFST 0x00000110
#define GQSPI_EN_OFST 0x00000114
#define GQSPI_TXD_OFST 0x0000011C
#define GQSPI_RXD_OFST 0x00000120
#define GQSPI_TX_THRESHOLD_OFST 0x00000128
#define GQSPI_RX_THRESHOLD_OFST 0x0000012C
#define GQSPI_LPBK_DLY_ADJ_OFST 0x00000138
#define GQSPI_GEN_FIFO_OFST 0x00000140
#define GQSPI_SEL_OFST 0x00000144
#define GQSPI_GF_THRESHOLD_OFST 0x00000150
#define GQSPI_FIFO_CTRL_OFST 0x0000014C
#define GQSPI_QSPIDMA_DST_CTRL_OFST 0x0000080C
#define GQSPI_QSPIDMA_DST_SIZE_OFST 0x00000804
#define GQSPI_QSPIDMA_DST_STS_OFST 0x00000808
#define GQSPI_QSPIDMA_DST_I_STS_OFST 0x00000814
#define GQSPI_QSPIDMA_DST_I_EN_OFST 0x00000818
#define GQSPI_QSPIDMA_DST_I_DIS_OFST 0x0000081C
#define GQSPI_QSPIDMA_DST_I_MASK_OFST 0x00000820
#define GQSPI_QSPIDMA_DST_ADDR_OFST 0x00000800
#define GQSPI_QSPIDMA_DST_ADDR_MSB_OFST 0x00000828
/* GQSPI register bit masks */
#define GQSPI_SEL_MASK 0x00000001
#define GQSPI_EN_MASK 0x00000001
#define GQSPI_LPBK_DLY_ADJ_USE_LPBK_MASK 0x00000020
#define GQSPI_ISR_WR_TO_CLR_MASK 0x00000002
#define GQSPI_IDR_ALL_MASK 0x00000FBE
#define GQSPI_CFG_MODE_EN_MASK 0xC0000000
#define GQSPI_CFG_GEN_FIFO_START_MODE_MASK 0x20000000
#define GQSPI_CFG_ENDIAN_MASK 0x04000000
#define GQSPI_CFG_EN_POLL_TO_MASK 0x00100000
#define GQSPI_CFG_WP_HOLD_MASK 0x00080000
#define GQSPI_CFG_BAUD_RATE_DIV_MASK 0x00000038
#define GQSPI_CFG_CLK_PHA_MASK 0x00000004
#define GQSPI_CFG_CLK_POL_MASK 0x00000002
#define GQSPI_CFG_START_GEN_FIFO_MASK 0x10000000
#define GQSPI_GENFIFO_IMM_DATA_MASK 0x000000FF
#define GQSPI_GENFIFO_DATA_XFER 0x00000100
#define GQSPI_GENFIFO_EXP 0x00000200
#define GQSPI_GENFIFO_MODE_SPI 0x00000400
#define GQSPI_GENFIFO_MODE_DUALSPI 0x00000800
#define GQSPI_GENFIFO_MODE_QUADSPI 0x00000C00
#define GQSPI_GENFIFO_MODE_MASK 0x00000C00
#define GQSPI_GENFIFO_CS_LOWER 0x00001000
#define GQSPI_GENFIFO_CS_UPPER 0x00002000
#define GQSPI_GENFIFO_BUS_LOWER 0x00004000
#define GQSPI_GENFIFO_BUS_UPPER 0x00008000
#define GQSPI_GENFIFO_BUS_BOTH 0x0000C000
#define GQSPI_GENFIFO_BUS_MASK 0x0000C000
#define GQSPI_GENFIFO_TX 0x00010000
#define GQSPI_GENFIFO_RX 0x00020000
#define GQSPI_GENFIFO_STRIPE 0x00040000
#define GQSPI_GENFIFO_POLL 0x00080000
#define GQSPI_GENFIFO_EXP_START 0x00000100
#define GQSPI_FIFO_CTRL_RST_RX_FIFO_MASK 0x00000004
#define GQSPI_FIFO_CTRL_RST_TX_FIFO_MASK 0x00000002
#define GQSPI_FIFO_CTRL_RST_GEN_FIFO_MASK 0x00000001
#define GQSPI_ISR_RXEMPTY_MASK 0x00000800
#define GQSPI_ISR_GENFIFOFULL_MASK 0x00000400
#define GQSPI_ISR_GENFIFONOT_FULL_MASK 0x00000200
#define GQSPI_ISR_TXEMPTY_MASK 0x00000100
#define GQSPI_ISR_GENFIFOEMPTY_MASK 0x00000080
#define GQSPI_ISR_RXFULL_MASK 0x00000020
#define GQSPI_ISR_RXNEMPTY_MASK 0x00000010
#define GQSPI_ISR_TXFULL_MASK 0x00000008
#define GQSPI_ISR_TXNOT_FULL_MASK 0x00000004
#define GQSPI_ISR_POLL_TIME_EXPIRE_MASK 0x00000002
#define GQSPI_IER_TXNOT_FULL_MASK 0x00000004
#define GQSPI_IER_RXEMPTY_MASK 0x00000800
#define GQSPI_IER_POLL_TIME_EXPIRE_MASK 0x00000002
#define GQSPI_IER_RXNEMPTY_MASK 0x00000010
#define GQSPI_IER_GENFIFOEMPTY_MASK 0x00000080
#define GQSPI_IER_TXEMPTY_MASK 0x00000100
#define GQSPI_QSPIDMA_DST_INTR_ALL_MASK 0x000000FE
#define GQSPI_QSPIDMA_DST_STS_WTC 0x0000E000
#define GQSPI_CFG_MODE_EN_DMA_MASK 0x80000000
#define GQSPI_ISR_IDR_MASK 0x00000994
#define GQSPI_QSPIDMA_DST_I_EN_DONE_MASK 0x00000002
#define GQSPI_QSPIDMA_DST_I_STS_DONE_MASK 0x00000002
#define GQSPI_IRQ_MASK 0x00000980
#define GQSPI_CFG_BAUD_RATE_DIV_SHIFT 3
#define GQSPI_GENFIFO_CS_SETUP 0x4
#define GQSPI_GENFIFO_CS_HOLD 0x3
#define GQSPI_TXD_DEPTH 64
#define GQSPI_RX_FIFO_THRESHOLD 32
#define GQSPI_RX_FIFO_FILL (GQSPI_RX_FIFO_THRESHOLD * 4)
#define GQSPI_TX_FIFO_THRESHOLD_RESET_VAL 32
#define GQSPI_TX_FIFO_FILL (GQSPI_TXD_DEPTH -\
GQSPI_TX_FIFO_THRESHOLD_RESET_VAL)
#define GQSPI_GEN_FIFO_THRESHOLD_RESET_VAL 0X10
#define GQSPI_QSPIDMA_DST_CTRL_RESET_VAL 0x803FFA00
#define GQSPI_SELECT_FLASH_CS_LOWER 0x1
#define GQSPI_SELECT_FLASH_CS_UPPER 0x2
#define GQSPI_SELECT_FLASH_CS_BOTH 0x3
#define GQSPI_SELECT_FLASH_BUS_LOWER 0x1
#define GQSPI_SELECT_FLASH_BUS_UPPER 0x2
#define GQSPI_SELECT_FLASH_BUS_BOTH 0x3
#define GQSPI_BAUD_DIV_MAX 7 /* Baud rate divisor maximum */
#define GQSPI_BAUD_DIV_SHIFT 2 /* Baud rate divisor shift */
#define GQSPI_SELECT_MODE_SPI 0x1
#define GQSPI_SELECT_MODE_DUALSPI 0x2
#define GQSPI_SELECT_MODE_QUADSPI 0x4
#define GQSPI_DMA_UNALIGN 0x3
#define GQSPI_DEFAULT_NUM_CS 1 /* Default number of chip selects */
enum mode_type {GQSPI_MODE_IO, GQSPI_MODE_DMA};
/**
* struct zynqmp_qspi - Defines qspi driver instance
* @regs: Virtual address of the QSPI controller registers
* @refclk: Pointer to the peripheral clock
* @pclk: Pointer to the APB clock
* @irq: IRQ number
* @dev: Pointer to struct device
* @txbuf: Pointer to the TX buffer
* @rxbuf: Pointer to the RX buffer
* @bytes_to_transfer: Number of bytes left to transfer
* @bytes_to_receive: Number of bytes left to receive
* @genfifocs: Used for chip select
* @genfifobus: Used to select the upper or lower bus
* @dma_rx_bytes: Remaining bytes to receive by DMA mode
* @dma_addr: DMA address after mapping the kernel buffer
* @genfifoentry: Used for storing the genfifoentry instruction.
* @mode: Defines the mode in which QSPI is operating
*/
struct zynqmp_qspi {
void __iomem *regs;
struct clk *refclk;
struct clk *pclk;
int irq;
struct device *dev;
const void *txbuf;
void *rxbuf;
int bytes_to_transfer;
int bytes_to_receive;
u32 genfifocs;
u32 genfifobus;
u32 dma_rx_bytes;
dma_addr_t dma_addr;
u32 genfifoentry;
enum mode_type mode;
};
/**
* zynqmp_gqspi_read: For GQSPI controller read operation
* @xqspi: Pointer to the zynqmp_qspi structure
* @offset: Offset from where to read
*/
static u32 zynqmp_gqspi_read(struct zynqmp_qspi *xqspi, u32 offset)
{
return readl_relaxed(xqspi->regs + offset);
}
/**
* zynqmp_gqspi_write: For GQSPI controller write operation
* @xqspi: Pointer to the zynqmp_qspi structure
* @offset: Offset where to write
* @val: Value to be written
*/
static inline void zynqmp_gqspi_write(struct zynqmp_qspi *xqspi, u32 offset,
u32 val)
{
writel_relaxed(val, (xqspi->regs + offset));
}
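These two accessors are used in read-modify-write sequences throughout the rest of the driver; a minimal sketch of that pattern (the bits cleared here are only an example, mirroring the init code further down):

/* Sketch only: read-modify-write of the configuration register */
u32 cfg = zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST);
cfg &= ~(GQSPI_CFG_CLK_POL_MASK | GQSPI_CFG_CLK_PHA_MASK);
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST, cfg);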
/**
* zynqmp_gqspi_selectslave: For selection of slave device
* @instanceptr: Pointer to the zynqmp_qspi structure
* @slavecs: Chip select to use
* @slavebus: Bus to use - upper or lower
*/
static void zynqmp_gqspi_selectslave(struct zynqmp_qspi *instanceptr,
u8 slavecs, u8 slavebus)
{
/*
* Bus and CS lines selected here will be updated in the instance and
* used for subsequent GENFIFO entries during transfer.
*/
/* Choose slave select line */
switch (slavecs) {
case GQSPI_SELECT_FLASH_CS_BOTH:
instanceptr->genfifocs = GQSPI_GENFIFO_CS_LOWER |
GQSPI_GENFIFO_CS_UPPER;
break;
case GQSPI_SELECT_FLASH_CS_UPPER:
instanceptr->genfifocs = GQSPI_GENFIFO_CS_UPPER;
break;
case GQSPI_SELECT_FLASH_CS_LOWER:
instanceptr->genfifocs = GQSPI_GENFIFO_CS_LOWER;
break;
default:
dev_warn(instanceptr->dev, "Invalid slave select\n");
}
/* Choose the bus */
switch (slavebus) {
case GQSPI_SELECT_FLASH_BUS_BOTH:
instanceptr->genfifobus = GQSPI_GENFIFO_BUS_LOWER |
GQSPI_GENFIFO_BUS_UPPER;
break;
case GQSPI_SELECT_FLASH_BUS_UPPER:
instanceptr->genfifobus = GQSPI_GENFIFO_BUS_UPPER;
break;
case GQSPI_SELECT_FLASH_BUS_LOWER:
instanceptr->genfifobus = GQSPI_GENFIFO_BUS_LOWER;
break;
default:
dev_warn(instanceptr->dev, "Invalid slave bus\n");
}
}
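The selection only records the GENFIFO CS and bus bits in the instance for later transfers; zynqmp_qspi_init_hw() below calls it for the lower flash, while a dual-parallel configuration would pass the "both" selectors (sketch only):

/* Sketch only: target both chip selects and both buses in parallel */
zynqmp_gqspi_selectslave(xqspi, GQSPI_SELECT_FLASH_CS_BOTH,
			 GQSPI_SELECT_FLASH_BUS_BOTH);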
/**
* zynqmp_qspi_init_hw: Initialize the hardware
* @xqspi: Pointer to the zynqmp_qspi structure
*
* The default settings of the QSPI controller's configurable parameters on
* reset are
* - Master mode
* - TX threshold set to 1
* - RX threshold set to 1
* - Flash memory interface mode enabled
* This function performs the following actions
* - Disable and clear all the interrupts
* - Enable manual slave select
* - Enable manual start
* - Deselect all the chip select lines
* - Set the little endian mode of TX FIFO and
* - Enable the QSPI controller
*/
static void zynqmp_qspi_init_hw(struct zynqmp_qspi *xqspi)
{
u32 config_reg;
/* Select the GQSPI mode */
zynqmp_gqspi_write(xqspi, GQSPI_SEL_OFST, GQSPI_SEL_MASK);
/* Clear and disable interrupts */
zynqmp_gqspi_write(xqspi, GQSPI_ISR_OFST,
zynqmp_gqspi_read(xqspi, GQSPI_ISR_OFST) |
GQSPI_ISR_WR_TO_CLR_MASK);
/* Clear the DMA STS */
zynqmp_gqspi_write(xqspi, GQSPI_QSPIDMA_DST_I_STS_OFST,
zynqmp_gqspi_read(xqspi,
GQSPI_QSPIDMA_DST_I_STS_OFST));
zynqmp_gqspi_write(xqspi, GQSPI_QSPIDMA_DST_STS_OFST,
zynqmp_gqspi_read(xqspi,
GQSPI_QSPIDMA_DST_STS_OFST) |
GQSPI_QSPIDMA_DST_STS_WTC);
zynqmp_gqspi_write(xqspi, GQSPI_IDR_OFST, GQSPI_IDR_ALL_MASK);
zynqmp_gqspi_write(xqspi,
GQSPI_QSPIDMA_DST_I_DIS_OFST,
GQSPI_QSPIDMA_DST_INTR_ALL_MASK);
/* Disable the GQSPI */
zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
config_reg = zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST);
config_reg &= ~GQSPI_CFG_MODE_EN_MASK;
/* Manual start */
config_reg |= GQSPI_CFG_GEN_FIFO_START_MODE_MASK;
/* Little endian by default */
config_reg &= ~GQSPI_CFG_ENDIAN_MASK;
/* Disable poll time out */
config_reg &= ~GQSPI_CFG_EN_POLL_TO_MASK;
/* Set hold bit */
config_reg |= GQSPI_CFG_WP_HOLD_MASK;
/* Clear pre-scalar by default */
config_reg &= ~GQSPI_CFG_BAUD_RATE_DIV_MASK;
/* CPHA 0 */
config_reg &= ~GQSPI_CFG_CLK_PHA_MASK;
/* CPOL 0 */
config_reg &= ~GQSPI_CFG_CLK_POL_MASK;
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST, config_reg);
/* Clear the TX and RX FIFO */
zynqmp_gqspi_write(xqspi, GQSPI_FIFO_CTRL_OFST,
GQSPI_FIFO_CTRL_RST_RX_FIFO_MASK |
GQSPI_FIFO_CTRL_RST_TX_FIFO_MASK |
GQSPI_FIFO_CTRL_RST_GEN_FIFO_MASK);
/* Set by default to allow for high frequencies */
zynqmp_gqspi_write(xqspi, GQSPI_LPBK_DLY_ADJ_OFST,
zynqmp_gqspi_read(xqspi, GQSPI_LPBK_DLY_ADJ_OFST) |
GQSPI_LPBK_DLY_ADJ_USE_LPBK_MASK);
/* Reset thresholds */
zynqmp_gqspi_write(xqspi, GQSPI_TX_THRESHOLD_OFST,
GQSPI_TX_FIFO_THRESHOLD_RESET_VAL);
zynqmp_gqspi_write(xqspi, GQSPI_RX_THRESHOLD_OFST,
GQSPI_RX_FIFO_THRESHOLD);
zynqmp_gqspi_write(xqspi, GQSPI_GF_THRESHOLD_OFST,
GQSPI_GEN_FIFO_THRESHOLD_RESET_VAL);
zynqmp_gqspi_selectslave(xqspi,
GQSPI_SELECT_FLASH_CS_LOWER,
GQSPI_SELECT_FLASH_BUS_LOWER);
/* Initialize DMA */
zynqmp_gqspi_write(xqspi,
GQSPI_QSPIDMA_DST_CTRL_OFST,
GQSPI_QSPIDMA_DST_CTRL_RESET_VAL);
/* Enable the GQSPI */
zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, GQSPI_EN_MASK);
}
/**
* zynqmp_qspi_copy_read_data: Copy data to RX buffer
* @xqspi: Pointer to the zynqmp_qspi structure
* @data: Word read from the RX FIFO
* @size: Number of bytes to copy from @data into the RX buffer
*/
static void zynqmp_qspi_copy_read_data(struct zynqmp_qspi *xqspi,
ulong data, u8 size)
{
memcpy(xqspi->rxbuf, &data, size);
xqspi->rxbuf += size;
xqspi->bytes_to_receive -= size;
}
/**
* zynqmp_prepare_transfer_hardware: Prepares hardware for transfer.
* @master: Pointer to the spi_master structure which provides
* information about the controller.
*
* This function enables the clocks and the SPI master controller.
*
* Return: 0 on success; error value otherwise
*/
static int zynqmp_prepare_transfer_hardware(struct spi_master *master)
{
struct zynqmp_qspi *xqspi = spi_master_get_devdata(master);
int ret;
ret = clk_enable(xqspi->refclk);
if (ret)
return ret;
ret = clk_enable(xqspi->pclk);
if (ret)
goto clk_err;
zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, GQSPI_EN_MASK);
return 0;
clk_err:
clk_disable(xqspi->refclk);
return ret;
}
/**
* zynqmp_unprepare_transfer_hardware: Relaxes hardware after transfer
* @master: Pointer to the spi_master structure which provides
* information about the controller.
*
* This function disables the SPI master controller.
*
* Return: Always 0
*/
static int zynqmp_unprepare_transfer_hardware(struct spi_master *master)
{
struct zynqmp_qspi *xqspi = spi_master_get_devdata(master);
zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
clk_disable(xqspi->refclk);
clk_disable(xqspi->pclk);
return 0;
}
/**
* zynqmp_qspi_chipselect: Select or deselect the chip select line
* @qspi: Pointer to the spi_device structure
* @is_high: Select (0) or deselect (1) the chip select line
*/
static void zynqmp_qspi_chipselect(struct spi_device *qspi, bool is_high)
{
struct zynqmp_qspi *xqspi = spi_master_get_devdata(qspi->master);
ulong timeout;
u32 genfifoentry = 0x0, statusreg;
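/*
 * The GQSPI controller drives the CS lines through GENFIFO entries:
 * asserting CS pushes an entry with the CS setup delay, de-asserting it
 * pushes one with the CS hold delay.
 */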
genfifoentry |= GQSPI_GENFIFO_MODE_SPI;
genfifoentry |= xqspi->genfifobus;
if (!is_high) {
genfifoentry |= xqspi->genfifocs;
genfifoentry |= GQSPI_GENFIFO_CS_SETUP;
} else {
genfifoentry |= GQSPI_GENFIFO_CS_HOLD;
}
zynqmp_gqspi_write(xqspi, GQSPI_GEN_FIFO_OFST, genfifoentry);
/* Dummy generic FIFO entry */
zynqmp_gqspi_write(xqspi, GQSPI_GEN_FIFO_OFST, 0x0);
/* Manually start the generic FIFO command */
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST,
zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST) |
GQSPI_CFG_START_GEN_FIFO_MASK);
timeout = jiffies + msecs_to_jiffies(1000);
/* Wait until the generic FIFO command is empty */
do {
statusreg = zynqmp_gqspi_read(xqspi, GQSPI_ISR_OFST);
if ((statusreg & GQSPI_ISR_GENFIFOEMPTY_MASK) &&
(statusreg & GQSPI_ISR_TXEMPTY_MASK))
break;
else
cpu_relax();
} while (!time_after_eq(jiffies, timeout));
if (time_after_eq(jiffies, timeout))
dev_err(xqspi->dev, "Chip select timed out\n");
}
/**
* zynqmp_qspi_setup_transfer: Configure QSPI controller for specified
* transfer
* @qspi: Pointer to the spi_device structure
* @transfer: Pointer to the spi_transfer structure which provides
* information about next transfer setup parameters
*
* Sets the operational mode of QSPI controller for the next QSPI transfer and
* sets the requested clock frequency.
*
* Return: Always 0
*
* Note:
* If the requested frequency cannot be matched exactly with the available
* pre-scaler values, the driver picks the closest frequency that is still
* below the requested one for the transfer.
*
* If the requested frequency is above or below the range supported by the
* QSPI controller, the driver sets the highest or lowest frequency the
* controller supports.
*/
static int zynqmp_qspi_setup_transfer(struct spi_device *qspi,
struct spi_transfer *transfer)
{
struct zynqmp_qspi *xqspi = spi_master_get_devdata(qspi->master);
ulong clk_rate;
u32 config_reg, req_hz, baud_rate_val = 0;
if (transfer)
req_hz = transfer->speed_hz;
else
req_hz = qspi->max_speed_hz;
/* Set the clock frequency */
/* If req_hz == 0, default to lowest speed */
clk_rate = clk_get_rate(xqspi->refclk);
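/*
 * The SPI clock is refclk divided by (GQSPI_BAUD_DIV_SHIFT << baud_rate_val);
 * keep increasing the divider until the resulting frequency drops to or
 * below the requested rate, or the maximum divider value is reached.
 */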
while ((baud_rate_val < GQSPI_BAUD_DIV_MAX) &&
(clk_rate /
(GQSPI_BAUD_DIV_SHIFT << baud_rate_val)) > req_hz)
baud_rate_val++;
config_reg = zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST);
/* Set the QSPI clock phase and clock polarity */
config_reg &= (~GQSPI_CFG_CLK_PHA_MASK) & (~GQSPI_CFG_CLK_POL_MASK);
if (qspi->mode & SPI_CPHA)
config_reg |= GQSPI_CFG_CLK_PHA_MASK;
if (qspi->mode & SPI_CPOL)
config_reg |= GQSPI_CFG_CLK_POL_MASK;
config_reg &= ~GQSPI_CFG_BAUD_RATE_DIV_MASK;
config_reg |= (baud_rate_val << GQSPI_CFG_BAUD_RATE_DIV_SHIFT);
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST, config_reg);
return 0;
}
/**
* zynqmp_qspi_setup: Configure the QSPI controller
* @qspi: Pointer to the spi_device structure
*
* Checks that the controller is not busy; the per-transfer configuration is
* done later in zynqmp_qspi_setup_transfer().
*
* Return: 0 on success; error value otherwise.
*/
static int zynqmp_qspi_setup(struct spi_device *qspi)
{
if (qspi->master->busy)
return -EBUSY;
return 0;
}
/**
* zynqmp_qspi_filltxfifo: Fills the TX FIFO as long as there is room in
* the FIFO and there is data left to be transmitted
* @xqspi: Pointer to the zynqmp_qspi structure
* @size: Number of bytes to be copied from TX buffer to TX FIFO
*/
static void zynqmp_qspi_filltxfifo(struct zynqmp_qspi *xqspi, int size)
{
u32 count = 0, intermediate;
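/*
 * Push the TX data into the FIFO one 32-bit word at a time; for a trailing
 * chunk of fewer than four bytes a full word is still written, but the byte
 * counters only advance by the bytes actually remaining.
 */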
while ((xqspi->bytes_to_transfer > 0) && (count < size)) {
memcpy(&intermediate, xqspi->txbuf, 4);
zynqmp_gqspi_write(xqspi, GQSPI_TXD_OFST, intermediate);
if (xqspi->bytes_to_transfer >= 4) {
xqspi->txbuf += 4;
xqspi->bytes_to_transfer -= 4;
} else {
xqspi->txbuf += xqspi->bytes_to_transfer;
xqspi->bytes_to_transfer = 0;
}
count++;
}
}
/**
* zynqmp_qspi_readrxfifo: Drains the RX FIFO as long as it contains data
* and there are bytes left to receive
* @xqspi: Pointer to the zynqmp_qspi structure
* @size: Maximum number of bytes to copy from the RX FIFO to the RX buffer
*/
static void zynqmp_qspi_readrxfifo(struct zynqmp_qspi *xqspi, u32 size)
{
ulong data;
int count = 0;
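/* Read whole 32-bit words while possible; the tail is copied byte-wise */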
while ((count < size) && (xqspi->bytes_to_receive > 0)) {
if (xqspi->bytes_to_receive >= 4) {
(*(u32 *) xqspi->rxbuf) =
zynqmp_gqspi_read(xqspi, GQSPI_RXD_OFST);
xqspi->rxbuf += 4;
xqspi->bytes_to_receive -= 4;
count += 4;
} else {
data = zynqmp_gqspi_read(xqspi, GQSPI_RXD_OFST);
count += xqspi->bytes_to_receive;
zynqmp_qspi_copy_read_data(xqspi, data,
xqspi->bytes_to_receive);
xqspi->bytes_to_receive = 0;
}
}
}
/**
* zynqmp_process_dma_irq: Handler for DMA done interrupt of QSPI
* controller
* @xqspi: zynqmp_qspi instance pointer
*
* This function handles DMA interrupt only.
*/
static void zynqmp_process_dma_irq(struct zynqmp_qspi *xqspi)
{
u32 config_reg, genfifoentry;
dma_unmap_single(xqspi->dev, xqspi->dma_addr,
xqspi->dma_rx_bytes, DMA_FROM_DEVICE);
xqspi->rxbuf += xqspi->dma_rx_bytes;
xqspi->bytes_to_receive -= xqspi->dma_rx_bytes;
xqspi->dma_rx_bytes = 0;
/* Disabling the DMA interrupts */
zynqmp_gqspi_write(xqspi, GQSPI_QSPIDMA_DST_I_DIS_OFST,
GQSPI_QSPIDMA_DST_I_EN_DONE_MASK);
if (xqspi->bytes_to_receive > 0) {
/* Switch to IO mode for the remaining bytes to receive */
config_reg = zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST);
config_reg &= ~GQSPI_CFG_MODE_EN_MASK;
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST, config_reg);
/* Initiate the transfer of remaining bytes */
genfifoentry = xqspi->genfifoentry;
genfifoentry |= xqspi->bytes_to_receive;
zynqmp_gqspi_write(xqspi, GQSPI_GEN_FIFO_OFST, genfifoentry);
/* Dummy generic FIFO entry */
zynqmp_gqspi_write(xqspi, GQSPI_GEN_FIFO_OFST, 0x0);
/* Manual start */
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST,
(zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST) |
GQSPI_CFG_START_GEN_FIFO_MASK));
/* Enable the RX interrupts for IO mode */
zynqmp_gqspi_write(xqspi, GQSPI_IER_OFST,
GQSPI_IER_GENFIFOEMPTY_MASK |
GQSPI_IER_RXNEMPTY_MASK |
GQSPI_IER_RXEMPTY_MASK);
}
}
/**
* zynqmp_qspi_irq: Interrupt service routine of the QSPI controller
* @irq: IRQ number
* @dev_id: Pointer to the xqspi structure
*
* This function handles TX empty only.
* On TX empty interrupt this function reads the received data from RX FIFO
* and fills the TX FIFO if there is any data remaining to be transferred.
*
* Return: IRQ_HANDLED when interrupt is handled
* IRQ_NONE otherwise.
*/
static irqreturn_t zynqmp_qspi_irq(int irq, void *dev_id)
{
struct spi_master *master = dev_id;
struct zynqmp_qspi *xqspi = spi_master_get_devdata(master);
int ret = IRQ_NONE;
u32 status, mask, dma_status = 0;
status = zynqmp_gqspi_read(xqspi, GQSPI_ISR_OFST);
zynqmp_gqspi_write(xqspi, GQSPI_ISR_OFST, status);
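/* Only act on interrupt sources that are not masked in IMASK */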
mask = (status & ~(zynqmp_gqspi_read(xqspi, GQSPI_IMASK_OFST)));
/* Read and clear DMA status */
if (xqspi->mode == GQSPI_MODE_DMA) {
dma_status =
zynqmp_gqspi_read(xqspi, GQSPI_QSPIDMA_DST_I_STS_OFST);
zynqmp_gqspi_write(xqspi, GQSPI_QSPIDMA_DST_I_STS_OFST,
dma_status);
}
if (mask & GQSPI_ISR_TXNOT_FULL_MASK) {
zynqmp_qspi_filltxfifo(xqspi, GQSPI_TX_FIFO_FILL);
ret = IRQ_HANDLED;
}
if (dma_status & GQSPI_QSPIDMA_DST_I_STS_DONE_MASK) {
zynqmp_process_dma_irq(xqspi);
ret = IRQ_HANDLED;
} else if (!(mask & GQSPI_IER_RXEMPTY_MASK) &&
(mask & GQSPI_IER_GENFIFOEMPTY_MASK)) {
zynqmp_qspi_readrxfifo(xqspi, GQSPI_RX_FIFO_FILL);
ret = IRQ_HANDLED;
}
if ((xqspi->bytes_to_receive == 0) && (xqspi->bytes_to_transfer == 0)
&& ((status & GQSPI_IRQ_MASK) == GQSPI_IRQ_MASK)) {
zynqmp_gqspi_write(xqspi, GQSPI_IDR_OFST, GQSPI_ISR_IDR_MASK);
spi_finalize_current_transfer(master);
ret = IRQ_HANDLED;
}
return ret;
}
/**
* zynqmp_qspi_selectspimode: Selects SPI mode - x1 or x2 or x4.
* @xqspi: xqspi is a pointer to the GQSPI instance
* @spimode: spimode - SPI or DUAL or QUAD.
* Return: Mask to set desired SPI mode in GENFIFO entry.
*/
static inline u32 zynqmp_qspi_selectspimode(struct zynqmp_qspi *xqspi,
u8 spimode)
{
u32 mask = 0;
switch (spimode) {
case GQSPI_SELECT_MODE_DUALSPI:
mask = GQSPI_GENFIFO_MODE_DUALSPI;
break;
case GQSPI_SELECT_MODE_QUADSPI:
mask = GQSPI_GENFIFO_MODE_QUADSPI;
break;
case GQSPI_SELECT_MODE_SPI:
mask = GQSPI_GENFIFO_MODE_SPI;
break;
default:
dev_warn(xqspi->dev, "Invalid SPI mode\n");
}
return mask;
}
/**
* zynq_qspi_setuprxdma: This function sets up the RX DMA operation
* @xqspi: xqspi is a pointer to the GQSPI instance.
*/
static void zynq_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
{
u32 rx_bytes, rx_rem, config_reg;
dma_addr_t addr;
u64 dma_align = (u64)(uintptr_t)xqspi->rxbuf;
if ((xqspi->bytes_to_receive < 8) ||
((dma_align & GQSPI_DMA_UNALIGN) != 0x0)) {
/* Setting to IO mode */
config_reg = zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST);
config_reg &= ~GQSPI_CFG_MODE_EN_MASK;
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST, config_reg);
xqspi->mode = GQSPI_MODE_IO;
xqspi->dma_rx_bytes = 0;
return;
}
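/*
 * The DMA engine transfers whole 32-bit words only; any remainder is
 * collected in IO mode from zynqmp_process_dma_irq() after the DMA done
 * interrupt fires.
 */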
rx_rem = xqspi->bytes_to_receive % 4;
rx_bytes = (xqspi->bytes_to_receive - rx_rem);
addr = dma_map_single(xqspi->dev, (void *)xqspi->rxbuf,
rx_bytes, DMA_FROM_DEVICE);
if (dma_mapping_error(xqspi->dev, addr))
dev_err(xqspi->dev, "ERR:rxdma:memory not mapped\n");
xqspi->dma_rx_bytes = rx_bytes;
xqspi->dma_addr = addr;
zynqmp_gqspi_write(xqspi, GQSPI_QSPIDMA_DST_ADDR_OFST,
(u32)(addr & 0xffffffff));
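/*
 * Program the upper part of the 64-bit DMA address; the two 16-bit shifts
 * avoid an undefined 32-bit shift when dma_addr_t is only 32 bits wide.
 */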
addr = ((addr >> 16) >> 16);
zynqmp_gqspi_write(xqspi, GQSPI_QSPIDMA_DST_ADDR_MSB_OFST,
((u32)addr) & 0xfff);
/* Enabling the DMA mode */
config_reg = zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST);
config_reg &= ~GQSPI_CFG_MODE_EN_MASK;
config_reg |= GQSPI_CFG_MODE_EN_DMA_MASK;
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST, config_reg);
/* Switch to DMA mode */
xqspi->mode = GQSPI_MODE_DMA;
/* Write the number of bytes to transfer */
zynqmp_gqspi_write(xqspi, GQSPI_QSPIDMA_DST_SIZE_OFST, rx_bytes);
}
/**
* zynqmp_qspi_txrxsetup: This function checks the TX/RX buffers in
* the transfer and sets up the GENFIFO entries,
* TX FIFO as required.
* @xqspi: xqspi is a pointer to the GQSPI instance.
* @transfer: It is a pointer to the structure containing transfer data.
* @genfifoentry: genfifoentry is pointer to the variable in which
* GENFIFO mask is returned to calling function
*/
static void zynqmp_qspi_txrxsetup(struct zynqmp_qspi *xqspi,
struct spi_transfer *transfer,
u32 *genfifoentry)
{
u32 config_reg;
/* Transmit */
if ((xqspi->txbuf != NULL) && (xqspi->rxbuf == NULL)) {
/* Setup data to be TXed */
*genfifoentry &= ~GQSPI_GENFIFO_RX;
*genfifoentry |= GQSPI_GENFIFO_DATA_XFER;
*genfifoentry |= GQSPI_GENFIFO_TX;
*genfifoentry |=
zynqmp_qspi_selectspimode(xqspi, transfer->tx_nbits);
xqspi->bytes_to_transfer = transfer->len;
if (xqspi->mode == GQSPI_MODE_DMA) {
config_reg = zynqmp_gqspi_read(xqspi,
GQSPI_CONFIG_OFST);
config_reg &= ~GQSPI_CFG_MODE_EN_MASK;
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST,
config_reg);
xqspi->mode = GQSPI_MODE_IO;
}
zynqmp_qspi_filltxfifo(xqspi, GQSPI_TXD_DEPTH);
/* Discard RX data */
xqspi->bytes_to_receive = 0;
} else if ((xqspi->txbuf == NULL) && (xqspi->rxbuf != NULL)) {
/* Receive */
/* TX auto fill */
*genfifoentry &= ~GQSPI_GENFIFO_TX;
/* Setup RX */
*genfifoentry |= GQSPI_GENFIFO_DATA_XFER;
*genfifoentry |= GQSPI_GENFIFO_RX;
*genfifoentry |=
zynqmp_qspi_selectspimode(xqspi, transfer->rx_nbits);
xqspi->bytes_to_transfer = 0;
xqspi->bytes_to_receive = transfer->len;
zynq_qspi_setuprxdma(xqspi);
}
}
/**
* zynqmp_qspi_start_transfer: Initiates the QSPI transfer
* @master: Pointer to the spi_master structure which provides
* information about the controller.
* @qspi: Pointer to the spi_device structure
* @transfer: Pointer to the spi_transfer structure which provide information
* about next transfer parameters
*
* This function fills the TX FIFO and starts the QSPI transfer; completion is
* signalled later from the interrupt handler via
* spi_finalize_current_transfer().
*
* Return: Length of the transfer (a positive value tells the SPI core the
* transfer is still in progress)
*/
static int zynqmp_qspi_start_transfer(struct spi_master *master,
struct spi_device *qspi,
struct spi_transfer *transfer)
{
struct zynqmp_qspi *xqspi = spi_master_get_devdata(master);
u32 genfifoentry = 0x0, transfer_len;
xqspi->txbuf = transfer->tx_buf;
xqspi->rxbuf = transfer->rx_buf;
zynqmp_qspi_setup_transfer(qspi, transfer);
genfifoentry |= xqspi->genfifocs;
genfifoentry |= xqspi->genfifobus;
zynqmp_qspi_txrxsetup(xqspi, transfer, &genfifoentry);
if (xqspi->mode == GQSPI_MODE_DMA)
transfer_len = xqspi->dma_rx_bytes;
else
transfer_len = transfer->len;
xqspi->genfifoentry = genfifoentry;
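/*
 * A GENFIFO entry encodes the transfer length either as an immediate byte
 * count (below GQSPI_GENFIFO_IMM_DATA_MASK) or, for longer transfers, as a
 * series of power-of-two "exponent" entries plus one final immediate entry
 * for the low-order byte count.
 */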
if ((transfer_len) < GQSPI_GENFIFO_IMM_DATA_MASK) {
genfifoentry &= ~GQSPI_GENFIFO_IMM_DATA_MASK;
genfifoentry |= transfer_len;
zynqmp_gqspi_write(xqspi, GQSPI_GEN_FIFO_OFST, genfifoentry);
} else {
int tempcount = transfer_len;
u32 exponent = 8; /* 2^8 = 256 */
u8 imm_data = tempcount & 0xFF;
tempcount &= ~(tempcount & 0xFF);
/* Immediate entry */
if (tempcount != 0) {
/* Exponent entries */
genfifoentry |= GQSPI_GENFIFO_EXP;
while (tempcount != 0) {
if (tempcount & GQSPI_GENFIFO_EXP_START) {
genfifoentry &=
~GQSPI_GENFIFO_IMM_DATA_MASK;
genfifoentry |= exponent;
zynqmp_gqspi_write(xqspi,
GQSPI_GEN_FIFO_OFST,
genfifoentry);
}
tempcount = tempcount >> 1;
exponent++;
}
}
if (imm_data != 0) {
genfifoentry &= ~GQSPI_GENFIFO_EXP;
genfifoentry &= ~GQSPI_GENFIFO_IMM_DATA_MASK;
genfifoentry |= (u8) (imm_data & 0xFF);
zynqmp_gqspi_write(xqspi,
GQSPI_GEN_FIFO_OFST, genfifoentry);
}
}
if ((xqspi->mode == GQSPI_MODE_IO) &&
(xqspi->rxbuf != NULL)) {
/* Dummy generic FIFO entry */
zynqmp_gqspi_write(xqspi, GQSPI_GEN_FIFO_OFST, 0x0);
}
/* Since we are using manual mode */
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST,
zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST) |
GQSPI_CFG_START_GEN_FIFO_MASK);
if (xqspi->txbuf != NULL)
/* Enable interrupts for TX */
zynqmp_gqspi_write(xqspi, GQSPI_IER_OFST,
GQSPI_IER_TXEMPTY_MASK |
GQSPI_IER_GENFIFOEMPTY_MASK |
GQSPI_IER_TXNOT_FULL_MASK);
if (xqspi->rxbuf != NULL) {
/* Enable interrupts for RX */
if (xqspi->mode == GQSPI_MODE_DMA) {
/* Enable DMA interrupts */
zynqmp_gqspi_write(xqspi,
GQSPI_QSPIDMA_DST_I_EN_OFST,
GQSPI_QSPIDMA_DST_I_EN_DONE_MASK);
} else {
zynqmp_gqspi_write(xqspi, GQSPI_IER_OFST,
GQSPI_IER_GENFIFOEMPTY_MASK |
GQSPI_IER_RXNEMPTY_MASK |
GQSPI_IER_RXEMPTY_MASK);
}
}
return transfer->len;
}
/**
* zynqmp_qspi_suspend: Suspend method for the QSPI driver
* @dev: Pointer to the struct device
*
* This function stops the QSPI driver queue and disables the QSPI controller
*
* Return: Always 0
*/
static int __maybe_unused zynqmp_qspi_suspend(struct device *dev)
{
struct platform_device *pdev = container_of(dev,
struct platform_device,
dev);
struct spi_master *master = platform_get_drvdata(pdev);
spi_master_suspend(master);
zynqmp_unprepare_transfer_hardware(master);
return 0;
}
/**
* zynqmp_qspi_resume: Resume method for the QSPI driver
* @dev: Pointer to the struct device
*
* This function enables the QSPI controller clocks and resumes the SPI
* master queue.
*
* Return: 0 on success; error value otherwise
*/
static int __maybe_unused zynqmp_qspi_resume(struct device *dev)
{
struct platform_device *pdev = container_of(dev,
struct platform_device,
dev);
struct spi_master *master = platform_get_drvdata(pdev);
struct zynqmp_qspi *xqspi = spi_master_get_devdata(master);
int ret = 0;
ret = clk_enable(xqspi->pclk);
if (ret) {
dev_err(dev, "Cannot enable APB clock.\n");
return ret;
}
ret = clk_enable(xqspi->refclk);
if (ret) {
dev_err(dev, "Cannot enable device clock.\n");
clk_disable(xqspi->pclk);
return ret;
}
spi_master_resume(master);
return 0;
}
static SIMPLE_DEV_PM_OPS(zynqmp_qspi_dev_pm_ops, zynqmp_qspi_suspend,
zynqmp_qspi_resume);
/**
* zynqmp_qspi_probe: Probe method for the QSPI driver
* @pdev: Pointer to the platform_device structure
*
* This function initializes the driver data structures and the hardware.
*
* Return: 0 on success; error value otherwise
*/
static int zynqmp_qspi_probe(struct platform_device *pdev)
{
int ret = 0;
struct spi_master *master;
struct zynqmp_qspi *xqspi;
struct resource *res;
struct device *dev = &pdev->dev;
master = spi_alloc_master(&pdev->dev, sizeof(*xqspi));
if (!master)
return -ENOMEM;
xqspi = spi_master_get_devdata(master);
master->dev.of_node = pdev->dev.of_node;
platform_set_drvdata(pdev, master);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
xqspi->regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(xqspi->regs)) {
ret = PTR_ERR(xqspi->regs);
goto remove_master;
}
xqspi->dev = dev;
xqspi->pclk = devm_clk_get(&pdev->dev, "pclk");
if (IS_ERR(xqspi->pclk)) {
dev_err(dev, "pclk clock not found.\n");
ret = PTR_ERR(xqspi->pclk);
goto remove_master;
}
ret = clk_prepare_enable(xqspi->pclk);
if (ret) {
dev_err(dev, "Unable to enable APB clock.\n");
goto remove_master;
}
xqspi->refclk = devm_clk_get(&pdev->dev, "ref_clk");
if (IS_ERR(xqspi->refclk)) {
dev_err(dev, "ref_clk clock not found.\n");
ret = PTR_ERR(xqspi->refclk);
goto clk_dis_pclk;
}
ret = clk_prepare_enable(xqspi->refclk);
if (ret) {
dev_err(dev, "Unable to enable device clock.\n");
goto clk_dis_pclk;
}
/* QSPI controller initializations */
zynqmp_qspi_init_hw(xqspi);
xqspi->irq = platform_get_irq(pdev, 0);
if (xqspi->irq <= 0) {
ret = -ENXIO;
dev_err(dev, "irq resource not found\n");
goto clk_dis_all;
}
ret = devm_request_irq(&pdev->dev, xqspi->irq, zynqmp_qspi_irq,
0, pdev->name, master);
if (ret != 0) {
ret = -ENXIO;
dev_err(dev, "request_irq failed\n");
goto clk_dis_all;
}
master->num_chipselect = GQSPI_DEFAULT_NUM_CS;
master->setup = zynqmp_qspi_setup;
master->set_cs = zynqmp_qspi_chipselect;
master->transfer_one = zynqmp_qspi_start_transfer;
master->prepare_transfer_hardware = zynqmp_prepare_transfer_hardware;
master->unprepare_transfer_hardware =
zynqmp_unprepare_transfer_hardware;
master->max_speed_hz = clk_get_rate(xqspi->refclk) / 2;
master->bits_per_word_mask = SPI_BPW_MASK(8);
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_RX_DUAL | SPI_RX_QUAD |
SPI_TX_DUAL | SPI_TX_QUAD;
if (master->dev.parent == NULL)
master->dev.parent = &master->dev;
ret = spi_register_master(master);
if (ret)
goto clk_dis_all;
return 0;
clk_dis_all:
clk_disable_unprepare(xqspi->refclk);
clk_dis_pclk:
clk_disable_unprepare(xqspi->pclk);
remove_master:
spi_master_put(master);
return ret;
}
/**
* zynqmp_qspi_remove: Remove method for the QSPI driver
* @pdev: Pointer to the platform_device structure
*
* This function is called if a device is physically removed from the system or
* if the driver module is being unloaded. It frees all resources allocated to
* the device.
*
* Return: 0 Always
*/
static int zynqmp_qspi_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct zynqmp_qspi *xqspi = spi_master_get_devdata(master);
zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
clk_disable_unprepare(xqspi->refclk);
clk_disable_unprepare(xqspi->pclk);
spi_unregister_master(master);
return 0;
}
static const struct of_device_id zynqmp_qspi_of_match[] = {
{ .compatible = "xlnx,zynqmp-qspi-1.0", },
{ /* End of table */ }
};
MODULE_DEVICE_TABLE(of, zynqmp_qspi_of_match);
static struct platform_driver zynqmp_qspi_driver = {
.probe = zynqmp_qspi_probe,
.remove = zynqmp_qspi_remove,
.driver = {
.name = "zynqmp-qspi",
.of_match_table = zynqmp_qspi_of_match,
.pm = &zynqmp_qspi_dev_pm_ops,
},
};
module_platform_driver(zynqmp_qspi_driver);
MODULE_AUTHOR("Xilinx, Inc.");
MODULE_DESCRIPTION("Xilinx Zynqmp QSPI driver");
MODULE_LICENSE("GPL");
@@ -571,7 +571,7 @@ static int __spi_map_msg(struct spi_master *master, struct spi_message *msg)
 return 0;
 }
-static int spi_unmap_msg(struct spi_master *master, struct spi_message *msg)
+static int __spi_unmap_msg(struct spi_master *master, struct spi_message *msg)
 {
 struct spi_transfer *xfer;
 struct device *tx_dev, *rx_dev;
@@ -583,15 +583,6 @@ static int spi_unmap_msg(struct spi_master *master, struct spi_message *msg)
 rx_dev = master->dma_rx->device->dev;
 list_for_each_entry(xfer, &msg->transfers, transfer_list) {
-/*
- * Restore the original value of tx_buf or rx_buf if they are
- * NULL.
- */
-if (xfer->tx_buf == master->dummy_tx)
-xfer->tx_buf = NULL;
-if (xfer->rx_buf == master->dummy_rx)
-xfer->rx_buf = NULL;
 if (!master->can_dma(master, msg->spi, xfer))
 continue;
@@ -608,13 +599,32 @@ static inline int __spi_map_msg(struct spi_master *master,
 return 0;
 }
-static inline int spi_unmap_msg(struct spi_master *master,
+static inline int __spi_unmap_msg(struct spi_master *master,
 struct spi_message *msg)
 {
 return 0;
 }
 #endif /* !CONFIG_HAS_DMA */
+static inline int spi_unmap_msg(struct spi_master *master,
+struct spi_message *msg)
+{
+struct spi_transfer *xfer;
+list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+/*
+ * Restore the original value of tx_buf or rx_buf if they are
+ * NULL.
+ */
+if (xfer->tx_buf == master->dummy_tx)
+xfer->tx_buf = NULL;
+if (xfer->rx_buf == master->dummy_rx)
+xfer->rx_buf = NULL;
+}
+return __spi_unmap_msg(master, msg);
+}
 static int spi_map_msg(struct spi_master *master, struct spi_message *msg)
 {
 struct spi_transfer *xfer;
@@ -988,9 +998,6 @@ void spi_finalize_current_message(struct spi_master *master)
 spin_lock_irqsave(&master->queue_lock, flags);
 mesg = master->cur_msg;
-master->cur_msg = NULL;
-queue_kthread_work(&master->kworker, &master->pump_messages);
 spin_unlock_irqrestore(&master->queue_lock, flags);
 spi_unmap_msg(master, mesg);
@@ -1003,9 +1010,13 @@ void spi_finalize_current_message(struct spi_master *master)
 }
 }
-trace_spi_message_done(mesg);
+spin_lock_irqsave(&master->queue_lock, flags);
+master->cur_msg = NULL;
 master->cur_msg_prepared = false;
+queue_kthread_work(&master->kworker, &master->pump_messages);
+spin_unlock_irqrestore(&master->queue_lock, flags);
+trace_spi_message_done(mesg);
 mesg->state = NULL;
 if (mesg->complete)
...
@@ -95,37 +95,25 @@ MODULE_PARM_DESC(bufsiz, "data bytes in biggest supported SPI message");
 /*-------------------------------------------------------------------------*/
-/*
- * We can't use the standard synchronous wrappers for file I/O; we
- * need to protect against async removal of the underlying spi_device.
- */
-static void spidev_complete(void *arg)
-{
-complete(arg);
-}
 static ssize_t
 spidev_sync(struct spidev_data *spidev, struct spi_message *message)
 {
 DECLARE_COMPLETION_ONSTACK(done);
 int status;
+struct spi_device *spi;
-message->complete = spidev_complete;
-message->context = &done;
 spin_lock_irq(&spidev->spi_lock);
-if (spidev->spi == NULL)
+spi = spidev->spi;
+spin_unlock_irq(&spidev->spi_lock);
+if (spi == NULL)
 status = -ESHUTDOWN;
 else
-status = spi_async(spidev->spi, message);
-spin_unlock_irq(&spidev->spi_lock);
+status = spi_sync(spi, message);
+if (status == 0)
+status = message->actual_length;
-if (status == 0) {
-wait_for_completion(&done);
-status = message->status;
-if (status == 0)
-status = message->actual_length;
-}
 return status;
 }
@@ -647,7 +635,6 @@ static int spidev_open(struct inode *inode, struct file *filp)
 static int spidev_release(struct inode *inode, struct file *filp)
 {
 struct spidev_data *spidev;
-int status = 0;
 mutex_lock(&device_list_lock);
 spidev = filp->private_data;
@@ -676,7 +663,7 @@ static int spidev_release(struct inode *inode, struct file *filp)
 }
 mutex_unlock(&device_list_lock);
-return status;
+return 0;
 }
 static const struct file_operations spidev_fops = {
...
@@ -194,8 +194,9 @@ enum pxa_ssp_type {
 PXA168_SSP,
 PXA910_SSP,
 CE4100_SSP,
-LPSS_SSP,
 QUARK_X1000_SSP,
+LPSS_LPT_SSP, /* Keep LPSS types sorted with lpss_platforms[] */
+LPSS_BYT_SSP,
 };
 struct ssp_device {
...