Commit 64cbd16a authored by Linus Torvalds

Merge tag 'mmc-v4.9' of git://git.linaro.org/people/ulf.hansson/mmc

Pull MMC updates from Ulf Hansson:

  MMC core:
   - Add support for sending commands during data transfer
   - Erase/discard/trim improvements
   - Improved error handling
   - Extend sysfs with SD status register
- Document info about the vmmc/vqmmc regulators
   - Extend pwrseq-simple to manage an optional post-power-on-delay
   - Some various minor improvements and cleanups

  MMC host:
   - dw_mmc: Add reset support
   - dw_mmc: Return -EILSEQ for EBE and SBE error
   - dw_mmc: Some cleanups
- dw_mmc-k3: Add UHS-I support for Hisilicon Hikey
   - tmio: Add eMMC support
   - sh_mobile_sdhi: Add r8a7796 support
   - sunxi: Don't use sample clocks for sun4i/sun5i
   - sunxi: Add support for A64 mmc controller
   - sunxi: Some cleanups and improvements
   - sdhci: Support for sending commands during data transfer
   - sdhci: Do not allow tuning procedure to be interrupted
   - sdhci-pci: Enable SD/SDIO on Merrifield
   - sdhci-pci|acpi: Enable MMC_CAP_CMD_DURING_TFR
   - sdhci-pci: Some cleanups
   - sdhci-of-arasan: Set controller to test mode when no CD bit
   - sdhci-of-arasan: Some fixes for clocks and phys
   - sdhci-brcmstb: Don't use ADMA 64-bit when not supported
   - sdhci-tegra: Mark 64-bit DMA broken on Tegra124
   - sdhci-esdhc-imx: Fixups related to data timeouts

* tag 'mmc-v4.9' of git://git.linaro.org/people/ulf.hansson/mmc: (68 commits)
  mmc: dw_mmc: remove the deprecated "supports-highspeed" property
  mmc: dw_mmc: minor cleanup for dw_mci_adjust_fifoth
  mmc: dw_mmc: use macro to define ring buffer size
  mmc: dw_mmc: fix misleading error print if failing to do DMA transfer
  mmc: dw_mmc: avoid race condition of cpu and IDMAC
  mmc: dw_mmc: split out preparation of desc for IDMAC32 and IDMAC64
  mmc: core: don't try to switch block size for dual rate mode
  mmc: sdhci-of-arasan: Set controller to test mode when no CD bit
dt: sdhci-of-arasan: Add device tree option xlnx,fails-without-test-cd
  mmc: tmio: add eMMC support
  mmc: rtsx_usb: use new macro for R1 without CRC
  mmc: rtsx_pci: use new macro for R1 without CRC
  mmc: add define for R1 response without CRC
  mmc: card: do away with indirection pointer
  mmc: sdhci-acpi: Set MMC_CAP_CMD_DURING_TFR for Intel eMMC controllers
  mmc: sdhci-pci: Set MMC_CAP_CMD_DURING_TFR for Intel eMMC controllers
  mmc: sdhci: Support cap_cmd_during_tfr requests
  mmc: mmc_test: Add tests for sending commands during transfer
  mmc: core: Add support for sending commands during data transfer
  mmc: sdhci-brcmstb: Fix incorrect capability
  ...
parents edadd0e5 0f75c404
......@@ -36,6 +36,9 @@ Optional Properties:
- #clock-cells: If specified this should be the value <0>. With this property
in place we will export a clock representing the Card Clock. This clock
is expected to be consumed by our PHY. You must also specify
- xlnx,fails-without-test-cd: when present, the controller doesn't work when
the CD line is not connected properly. Test mode can be used to force the
controller to function.
Example:
sdhci@e0100000 {
......
......@@ -8,7 +8,9 @@ on Device Tree properties to enable them for SoC/Board combinations
that support them.
Required properties:
- compatible: "brcm,bcm7425-sdhci"
- compatible: should be one of the following
- "brcm,bcm7425-sdhci"
- "brcm,bcm7445-sdhci"
Refer to clocks/clock-bindings.txt for generic clock consumer properties.
......
......@@ -16,6 +16,8 @@ Optional properties:
See ../clocks/clock-bindings.txt for details.
- clock-names : Must include the following entry:
"ext_clock" (External clock provided to the card).
- post-power-on-delay-ms : Delay in ms after powering the card and
de-asserting the reset-gpios (if any)
Example:
......
......@@ -75,6 +75,17 @@ Optional SDIO properties:
- wakeup-source: Enables wake up of host system on SDIO IRQ assertion
(Legacy property supported: "enable-sdio-wakeup")
MMC power
---------
Controllers may implement power control of both the connected card and
the IO signaling (for example to switch to high-speed 1.8V signaling). If
the system supports this, then the following two properties should point
to valid regulator nodes (a consumer sketch follows this list):
- vqmmc-supply: supply node for IO line power
- vmmc-supply: supply node for the card's power
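As a rough sketch of how a host driver consumes these supplies (not code from
this series; the helper name is made up, while mmc_regulator_get_supply() and
the host->supply fields are the existing core API):

#include <linux/device.h>
#include <linux/mmc/host.h>
#include <linux/regulator/consumer.h>

/* Illustrative helper; "host" is an already-allocated struct mmc_host. */
static int example_parse_mmc_supplies(struct mmc_host *host)
{
	int ret;

	/* Looks up vmmc-supply and vqmmc-supply from the host's DT node. */
	ret = mmc_regulator_get_supply(host);
	if (ret)
		return ret;	/* typically -EPROBE_DEFER */

	/* Either supply may be absent; the core leaves an ERR_PTR() there. */
	if (IS_ERR(host->supply.vqmmc))
		dev_dbg(mmc_dev(host), "no vqmmc regulator, IO voltage is fixed\n");

	return 0;
}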
MMC power sequences:
--------------------
......@@ -102,11 +113,13 @@ Required host node properties when using function subnodes:
- #size-cells: should be zero.
Required function subnode properties:
- compatible: name of SDIO function following generic names recommended practice
- reg: Must contain the SDIO function number of the function this subnode
describes. A value of 0 denotes the memory SD function, values from
1 to 7 denote the SDIO functions.
Optional function subnode properties:
- compatible: name of SDIO function following generic names recommended practice
Examples
--------
......
......@@ -8,7 +8,12 @@ as the speed of SD standard 3.0.
Absolute maximum transfer rate is 200MB/s
Required properties:
- compatible : "allwinner,sun4i-a10-mmc" or "allwinner,sun5i-a13-mmc"
- compatible : should be one of:
* "allwinner,sun4i-a10-mmc"
* "allwinner,sun5i-a13-mmc"
* "allwinner,sun7i-a20-mmc"
* "allwinner,sun9i-a80-mmc"
* "allwinner,sun50i-a64-mmc"
- reg : mmc controller base registers
- clocks : a list with 4 phandle + clock specifier pairs
- clock-names : must contain "ahb", "mmc", "output" and "sample"
......
......@@ -39,6 +39,10 @@ Required Properties:
Optional properties:
* resets: phandle + reset specifier pair, intended to represent a hardware
reset signal present internally in some host controller IC designs.
See Documentation/devicetree/bindings/reset/reset.txt for details; a
consumer sketch follows this list.
* clocks: from common clock binding: handle to biu and ciu clocks for the
bus interface unit clock and the card interface unit clock.
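As the consumer sketch referenced above (not from this series; the probe
function is hypothetical, and it assumes a recent kernel where the optional
reset accessor returns NULL when the property is absent):

#include <linux/delay.h>
#include <linux/platform_device.h>
#include <linux/reset.h>

/* Hypothetical probe fragment; dw_mmc's real handling lives in dw_mmc.c. */
static int example_mmc_probe(struct platform_device *pdev)
{
	struct reset_control *rst;

	/* NULL id: take the first (and only) entry in "resets". */
	rst = devm_reset_control_get_optional_exclusive(&pdev->dev, NULL);
	if (IS_ERR(rst))
		return PTR_ERR(rst);	/* e.g. -EPROBE_DEFER */

	/* reset_control_*() accept NULL, so an absent reset is a no-op. */
	reset_control_assert(rst);
	usleep_range(10, 50);
	reset_control_deassert(rst);

	return 0;
}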
......
......@@ -23,6 +23,7 @@ Required properties:
"renesas,sdhi-r8a7793" - SDHI IP on R8A7793 SoC
"renesas,sdhi-r8a7794" - SDHI IP on R8A7794 SoC
"renesas,sdhi-r8a7795" - SDHI IP on R8A7795 SoC
"renesas,sdhi-r8a7796" - SDHI IP on R8A7796 SoC
Optional properties:
- toshiba,mmc-wrprotect-disable: write-protect detection is unavailable
......
......@@ -469,7 +469,7 @@ dma: dma-controller@01c02000 {
};
mmc0: mmc@01c0f000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c0f000 0x1000>;
clocks = <&ahb1_gates 8>,
<&mmc0_clk 0>,
......@@ -488,7 +488,7 @@ mmc0: mmc@01c0f000 {
};
mmc1: mmc@01c10000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c10000 0x1000>;
clocks = <&ahb1_gates 9>,
<&mmc1_clk 0>,
......@@ -507,7 +507,7 @@ mmc1: mmc@01c10000 {
};
mmc2: mmc@01c11000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c11000 0x1000>;
clocks = <&ahb1_gates 10>,
<&mmc2_clk 0>,
......@@ -526,7 +526,7 @@ mmc2: mmc@01c11000 {
};
mmc3: mmc@01c12000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c12000 0x1000>;
clocks = <&ahb1_gates 11>,
<&mmc3_clk 0>,
......
......@@ -905,7 +905,7 @@ mdio: mdio@01c0b080 {
};
mmc0: mmc@01c0f000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c0f000 0x1000>;
clocks = <&ahb_gates 8>,
<&mmc0_clk 0>,
......@@ -922,7 +922,7 @@ mmc0: mmc@01c0f000 {
};
mmc1: mmc@01c10000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c10000 0x1000>;
clocks = <&ahb_gates 9>,
<&mmc1_clk 0>,
......@@ -939,7 +939,7 @@ mmc1: mmc@01c10000 {
};
mmc2: mmc@01c11000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c11000 0x1000>;
clocks = <&ahb_gates 10>,
<&mmc2_clk 0>,
......@@ -956,7 +956,7 @@ mmc2: mmc@01c11000 {
};
mmc3: mmc@01c12000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c12000 0x1000>;
clocks = <&ahb_gates 11>,
<&mmc3_clk 0>,
......
......@@ -266,7 +266,7 @@ dma: dma-controller@01c02000 {
};
mmc0: mmc@01c0f000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c0f000 0x1000>;
clocks = <&ahb1_gates 8>,
<&mmc0_clk 0>,
......@@ -285,7 +285,7 @@ mmc0: mmc@01c0f000 {
};
mmc1: mmc@01c10000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c10000 0x1000>;
clocks = <&ahb1_gates 9>,
<&mmc1_clk 0>,
......@@ -304,7 +304,7 @@ mmc1: mmc@01c10000 {
};
mmc2: mmc@01c11000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c11000 0x1000>;
clocks = <&ahb1_gates 10>,
<&mmc2_clk 0>,
......
......@@ -150,7 +150,7 @@ dma: dma-controller@01c02000 {
};
mmc0: mmc@01c0f000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c0f000 0x1000>;
clocks = <&ccu CLK_BUS_MMC0>,
<&ccu CLK_MMC0>,
......@@ -169,7 +169,7 @@ mmc0: mmc@01c0f000 {
};
mmc1: mmc@01c10000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c10000 0x1000>;
clocks = <&ccu CLK_BUS_MMC1>,
<&ccu CLK_MMC1>,
......@@ -188,7 +188,7 @@ mmc1: mmc@01c10000 {
};
mmc2: mmc@01c11000 {
compatible = "allwinner,sun5i-a13-mmc";
compatible = "allwinner,sun7i-a20-mmc";
reg = <0x01c11000 0x1000>;
clocks = <&ccu CLK_BUS_MMC2>,
<&ccu CLK_MMC2>,
......
......@@ -142,8 +142,6 @@ static inline void mmc_blk_clear_packed(struct mmc_queue_req *mqrq)
{
struct mmc_packed *packed = mqrq->packed;
BUG_ON(!packed);
mqrq->cmd_type = MMC_PACKED_NONE;
packed->nr_entries = MMC_PACKED_NR_ZERO;
packed->idx_failure = MMC_PACKED_NR_IDX;
......@@ -1443,8 +1441,6 @@ static int mmc_blk_packed_err_check(struct mmc_card *card,
int err, check, status;
u8 *ext_csd;
BUG_ON(!packed);
packed->retries--;
check = mmc_blk_err_check(card, areq);
err = get_card_status(card, &status, 0);
......@@ -1673,6 +1669,18 @@ static u8 mmc_blk_prep_packed_list(struct mmc_queue *mq, struct request *req)
u8 max_packed_rw = 0;
u8 reqs = 0;
/*
* We don't need to check 'packed' any further for
* packed-command handling, as we set MMC_PACKED_NONE
* and return zero for reqs when 'packed' is NULL. We also
* clear the MMC_BLK_PACKED_CMD flag to avoid doing
* it again when removing the blk request.
*/
if (!mqrq->packed) {
md->flags &= (~MMC_BLK_PACKED_CMD);
goto no_packed;
}
if (!(md->flags & MMC_BLK_PACKED_CMD))
goto no_packed;
......@@ -1782,8 +1790,6 @@ static void mmc_blk_packed_hdr_wrq_prep(struct mmc_queue_req *mqrq,
u8 hdr_blocks;
u8 i = 1;
BUG_ON(!packed);
mqrq->cmd_type = MMC_PACKED_WRITE;
packed->blocks = 0;
packed->idx_failure = MMC_PACKED_NR_IDX;
......@@ -1887,8 +1893,6 @@ static int mmc_blk_end_packed_req(struct mmc_queue_req *mq_rq)
int idx = packed->idx_failure, i = 0;
int ret = 0;
BUG_ON(!packed);
while (!list_empty(&packed->list)) {
prq = list_entry_rq(packed->list.next);
if (idx == i) {
......@@ -1917,8 +1921,6 @@ static void mmc_blk_abort_packed_req(struct mmc_queue_req *mq_rq)
struct request *prq;
struct mmc_packed *packed = mq_rq->packed;
BUG_ON(!packed);
while (!list_empty(&packed->list)) {
prq = list_entry_rq(packed->list.next);
list_del_init(&prq->queuelist);
......@@ -1935,8 +1937,6 @@ static void mmc_blk_revert_packed_req(struct mmc_queue *mq,
struct request_queue *q = mq->queue;
struct mmc_packed *packed = mq_rq->packed;
BUG_ON(!packed);
while (!list_empty(&packed->list)) {
prq = list_entry_rq(packed->list.prev);
if (prq->queuelist.prev != &packed->list) {
......@@ -2144,7 +2144,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
return 0;
}
static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
{
int ret;
struct mmc_blk_data *md = mq->data;
......@@ -2265,7 +2265,6 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
if (ret)
goto err_putdisk;
md->queue.issue_fn = mmc_blk_issue_rq;
md->queue.data = md;
md->disk->major = MMC_BLOCK_MAJOR;
......@@ -2303,7 +2302,8 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
set_capacity(md->disk, size);
if (mmc_host_cmd23(card->host)) {
if (mmc_card_mmc(card) ||
if ((mmc_card_mmc(card) &&
card->csd.mmca_vsn >= CSD_SPEC_VER_3) ||
(mmc_card_sd(card) &&
card->scr.cmds & SD_SCR_CMD23_SUPPORT))
md->flags |= MMC_BLK_CMD23;
......
int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
......@@ -184,6 +184,29 @@ static int mmc_test_set_blksize(struct mmc_test_card *test, unsigned size)
return mmc_set_blocklen(test->card, size);
}
static bool mmc_test_card_cmd23(struct mmc_card *card)
{
return mmc_card_mmc(card) ||
(mmc_card_sd(card) && card->scr.cmds & SD_SCR_CMD23_SUPPORT);
}
static void mmc_test_prepare_sbc(struct mmc_test_card *test,
struct mmc_request *mrq, unsigned int blocks)
{
struct mmc_card *card = test->card;
if (!mrq->sbc || !mmc_host_cmd23(card->host) ||
!mmc_test_card_cmd23(card) || !mmc_op_multi(mrq->cmd->opcode) ||
(card->quirks & MMC_QUIRK_BLK_NO_CMD23)) {
mrq->sbc = NULL;
return;
}
mrq->sbc->opcode = MMC_SET_BLOCK_COUNT;
mrq->sbc->arg = blocks;
mrq->sbc->flags = MMC_RSP_R1 | MMC_CMD_AC;
}
/*
* Fill in the mmc_request structure given a set of transfer parameters.
*/
......@@ -221,6 +244,8 @@ static void mmc_test_prepare_mrq(struct mmc_test_card *test,
mrq->data->sg = sg;
mrq->data->sg_len = sg_len;
mmc_test_prepare_sbc(test, mrq, blocks);
mmc_set_data_timeout(mrq->data, test->card);
}
......@@ -693,6 +718,8 @@ static int mmc_test_check_result(struct mmc_test_card *test,
ret = 0;
if (mrq->sbc && mrq->sbc->error)
ret = mrq->sbc->error;
if (!ret && mrq->cmd->error)
ret = mrq->cmd->error;
if (!ret && mrq->data->error)
......@@ -2278,6 +2305,245 @@ static int mmc_test_reset(struct mmc_test_card *test)
return RESULT_FAIL;
}
struct mmc_test_req {
struct mmc_request mrq;
struct mmc_command sbc;
struct mmc_command cmd;
struct mmc_command stop;
struct mmc_command status;
struct mmc_data data;
};
static struct mmc_test_req *mmc_test_req_alloc(void)
{
struct mmc_test_req *rq = kzalloc(sizeof(*rq), GFP_KERNEL);
if (rq) {
rq->mrq.cmd = &rq->cmd;
rq->mrq.data = &rq->data;
rq->mrq.stop = &rq->stop;
}
return rq;
}
static int mmc_test_send_status(struct mmc_test_card *test,
struct mmc_command *cmd)
{
memset(cmd, 0, sizeof(*cmd));
cmd->opcode = MMC_SEND_STATUS;
if (!mmc_host_is_spi(test->card->host))
cmd->arg = test->card->rca << 16;
cmd->flags = MMC_RSP_SPI_R2 | MMC_RSP_R1 | MMC_CMD_AC;
return mmc_wait_for_cmd(test->card->host, cmd, 0);
}
static int mmc_test_ongoing_transfer(struct mmc_test_card *test,
unsigned int dev_addr, int use_sbc,
int repeat_cmd, int write, int use_areq)
{
struct mmc_test_req *rq = mmc_test_req_alloc();
struct mmc_host *host = test->card->host;
struct mmc_test_area *t = &test->area;
struct mmc_async_req areq;
struct mmc_request *mrq;
unsigned long timeout;
bool expired = false;
int ret = 0, cmd_ret;
u32 status = 0;
int count = 0;
if (!rq)
return -ENOMEM;
mrq = &rq->mrq;
if (use_sbc)
mrq->sbc = &rq->sbc;
mrq->cap_cmd_during_tfr = true;
areq.mrq = mrq;
areq.err_check = mmc_test_check_result_async;
mmc_test_prepare_mrq(test, mrq, t->sg, t->sg_len, dev_addr, t->blocks,
512, write);
if (use_sbc && t->blocks > 1 && !mrq->sbc) {
ret = mmc_host_cmd23(host) ?
RESULT_UNSUP_CARD :
RESULT_UNSUP_HOST;
goto out_free;
}
/* Start ongoing data request */
if (use_areq) {
mmc_start_req(host, &areq, &ret);
if (ret)
goto out_free;
} else {
mmc_wait_for_req(host, mrq);
}
timeout = jiffies + msecs_to_jiffies(3000);
do {
count += 1;
/* Send status command while data transfer in progress */
cmd_ret = mmc_test_send_status(test, &rq->status);
if (cmd_ret)
break;
status = rq->status.resp[0];
if (status & R1_ERROR) {
cmd_ret = -EIO;
break;
}
if (mmc_is_req_done(host, mrq))
break;
expired = time_after(jiffies, timeout);
if (expired) {
pr_info("%s: timeout waiting for Tran state status %#x\n",
mmc_hostname(host), status);
cmd_ret = -ETIMEDOUT;
break;
}
} while (repeat_cmd && R1_CURRENT_STATE(status) != R1_STATE_TRAN);
/* Wait for data request to complete */
if (use_areq)
mmc_start_req(host, NULL, &ret);
else
mmc_wait_for_req_done(test->card->host, mrq);
/*
* For cap_cmd_during_tfr request, upper layer must send stop if
* required.
*/
if (mrq->data->stop && (mrq->data->error || !mrq->sbc)) {
if (ret)
mmc_wait_for_cmd(host, mrq->data->stop, 0);
else
ret = mmc_wait_for_cmd(host, mrq->data->stop, 0);
}
if (ret)
goto out_free;
if (cmd_ret) {
pr_info("%s: Send Status failed: status %#x, error %d\n",
mmc_hostname(test->card->host), status, cmd_ret);
}
ret = mmc_test_check_result(test, mrq);
if (ret)
goto out_free;
ret = mmc_test_wait_busy(test);
if (ret)
goto out_free;
if (repeat_cmd && (t->blocks + 1) << 9 > t->max_tfr)
pr_info("%s: %d commands completed during transfer of %u blocks\n",
mmc_hostname(test->card->host), count, t->blocks);
if (cmd_ret)
ret = cmd_ret;
out_free:
kfree(rq);
return ret;
}
static int __mmc_test_cmds_during_tfr(struct mmc_test_card *test,
unsigned long sz, int use_sbc, int write,
int use_areq)
{
struct mmc_test_area *t = &test->area;
int ret;
if (!(test->card->host->caps & MMC_CAP_CMD_DURING_TFR))
return RESULT_UNSUP_HOST;
ret = mmc_test_area_map(test, sz, 0, 0);
if (ret)
return ret;
ret = mmc_test_ongoing_transfer(test, t->dev_addr, use_sbc, 0, write,
use_areq);
if (ret)
return ret;
return mmc_test_ongoing_transfer(test, t->dev_addr, use_sbc, 1, write,
use_areq);
}
static int mmc_test_cmds_during_tfr(struct mmc_test_card *test, int use_sbc,
int write, int use_areq)
{
struct mmc_test_area *t = &test->area;
unsigned long sz;
int ret;
for (sz = 512; sz <= t->max_tfr; sz += 512) {
ret = __mmc_test_cmds_during_tfr(test, sz, use_sbc, write,
use_areq);
if (ret)
return ret;
}
return 0;
}
/*
* Commands during read - no Set Block Count (CMD23).
*/
static int mmc_test_cmds_during_read(struct mmc_test_card *test)
{
return mmc_test_cmds_during_tfr(test, 0, 0, 0);
}
/*
* Commands during write - no Set Block Count (CMD23).
*/
static int mmc_test_cmds_during_write(struct mmc_test_card *test)
{
return mmc_test_cmds_during_tfr(test, 0, 1, 0);
}
/*
* Commands during read - use Set Block Count (CMD23).
*/
static int mmc_test_cmds_during_read_cmd23(struct mmc_test_card *test)
{
return mmc_test_cmds_during_tfr(test, 1, 0, 0);
}
/*
* Commands during write - use Set Block Count (CMD23).
*/
static int mmc_test_cmds_during_write_cmd23(struct mmc_test_card *test)
{
return mmc_test_cmds_during_tfr(test, 1, 1, 0);
}
/*
* Commands during non-blocking read - use Set Block Count (CMD23).
*/
static int mmc_test_cmds_during_read_cmd23_nonblock(struct mmc_test_card *test)
{
return mmc_test_cmds_during_tfr(test, 1, 0, 1);
}
/*
* Commands during non-blocking write - use Set Block Count (CMD23).
*/
static int mmc_test_cmds_during_write_cmd23_nonblock(struct mmc_test_card *test)
{
return mmc_test_cmds_during_tfr(test, 1, 1, 1);
}
static const struct mmc_test_case mmc_test_cases[] = {
{
.name = "Basic write (no data verification)",
......@@ -2605,6 +2871,48 @@ static const struct mmc_test_case mmc_test_cases[] = {
.name = "Reset test",
.run = mmc_test_reset,
},
{
.name = "Commands during read - no Set Block Count (CMD23)",
.prepare = mmc_test_area_prepare,
.run = mmc_test_cmds_during_read,
.cleanup = mmc_test_area_cleanup,
},
{
.name = "Commands during write - no Set Block Count (CMD23)",
.prepare = mmc_test_area_prepare,
.run = mmc_test_cmds_during_write,
.cleanup = mmc_test_area_cleanup,
},
{
.name = "Commands during read - use Set Block Count (CMD23)",
.prepare = mmc_test_area_prepare,
.run = mmc_test_cmds_during_read_cmd23,
.cleanup = mmc_test_area_cleanup,
},
{
.name = "Commands during write - use Set Block Count (CMD23)",
.prepare = mmc_test_area_prepare,
.run = mmc_test_cmds_during_write_cmd23,
.cleanup = mmc_test_area_cleanup,
},
{
.name = "Commands during non-blocking read - use Set Block Count (CMD23)",
.prepare = mmc_test_area_prepare,
.run = mmc_test_cmds_during_read_cmd23_nonblock,
.cleanup = mmc_test_area_cleanup,
},
{
.name = "Commands during non-blocking write - use Set Block Count (CMD23)",
.prepare = mmc_test_area_prepare,
.run = mmc_test_cmds_during_write_cmd23_nonblock,
.cleanup = mmc_test_area_cleanup,
},
};
static DEFINE_MUTEX(mmc_test_lock);
......
......@@ -19,7 +19,9 @@
#include <linux/mmc/card.h>
#include <linux/mmc/host.h>
#include "queue.h"
#include "block.h"
#define MMC_QUEUE_BOUNCESZ 65536
......@@ -68,7 +70,7 @@ static int mmc_queue_thread(void *d)
bool req_is_special = mmc_req_is_special(req);
set_current_state(TASK_RUNNING);
mq->issue_fn(mq, req);
mmc_blk_issue_rq(mq, req);
cond_resched();
if (mq->flags & MMC_QUEUE_NEW_REQUEST) {
mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
......
......@@ -57,8 +57,6 @@ struct mmc_queue {
unsigned int flags;
#define MMC_QUEUE_SUSPENDED (1 << 0)
#define MMC_QUEUE_NEW_REQUEST (1 << 1)
int (*issue_fn)(struct mmc_queue *, struct request *);
void *data;
struct request_queue *queue;
struct mmc_queue_req mqrq[2];
......
......@@ -58,6 +58,9 @@
*/
#define MMC_BKOPS_MAX_TIMEOUT (4 * 60 * 1000) /* max time to wait in ms */
/* The max erase timeout, used when host->max_busy_timeout isn't specified */
#define MMC_ERASE_TIMEOUT_MS (60 * 1000) /* 60 s */
static const unsigned freqs[] = { 400000, 300000, 200000, 100000 };
/*
......@@ -117,6 +120,24 @@ static inline void mmc_should_fail_request(struct mmc_host *host,
#endif /* CONFIG_FAIL_MMC_REQUEST */
static inline void mmc_complete_cmd(struct mmc_request *mrq)
{
if (mrq->cap_cmd_during_tfr && !completion_done(&mrq->cmd_completion))
complete_all(&mrq->cmd_completion);
}
void mmc_command_done(struct mmc_host *host, struct mmc_request *mrq)
{
if (!mrq->cap_cmd_during_tfr)
return;
mmc_complete_cmd(mrq);
pr_debug("%s: cmd done, tfr ongoing (CMD%u)\n",
mmc_hostname(host), mrq->cmd->opcode);
}
EXPORT_SYMBOL(mmc_command_done);
/**
* mmc_request_done - finish processing an MMC request
* @host: MMC host which completed request
......@@ -143,6 +164,11 @@ void mmc_request_done(struct mmc_host *host, struct mmc_request *mrq)
cmd->retries = 0;
}
if (host->ongoing_mrq == mrq)
host->ongoing_mrq = NULL;
mmc_complete_cmd(mrq);
trace_mmc_request_done(host, mrq);
if (err && cmd->retries && !mmc_card_removed(host->card)) {
......@@ -155,7 +181,8 @@ void mmc_request_done(struct mmc_host *host, struct mmc_request *mrq)
} else {
mmc_should_fail_request(host, mrq);
led_trigger_event(host->led, LED_OFF);
if (!host->ongoing_mrq)
led_trigger_event(host->led, LED_OFF);
if (mrq->sbc) {
pr_debug("%s: req done <CMD%u>: %d: %08x %08x %08x %08x\n",
......@@ -220,6 +247,15 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
}
}
if (mrq->cap_cmd_during_tfr) {
host->ongoing_mrq = mrq;
/*
* The retry path could come through here without having waited on
* cmd_completion, so ensure it is reinitialised.
*/
reinit_completion(&mrq->cmd_completion);
}
trace_mmc_request_start(host, mrq);
host->ops->request(host, mrq);
......@@ -386,6 +422,18 @@ static void mmc_wait_done(struct mmc_request *mrq)
complete(&mrq->completion);
}
static inline void mmc_wait_ongoing_tfr_cmd(struct mmc_host *host)
{
struct mmc_request *ongoing_mrq = READ_ONCE(host->ongoing_mrq);
/*
* If there is an ongoing transfer, wait for the command line to become
* available.
*/
if (ongoing_mrq && !completion_done(&ongoing_mrq->cmd_completion))
wait_for_completion(&ongoing_mrq->cmd_completion);
}
/*
*__mmc_start_data_req() - starts data request
* @host: MMC host to start the request
......@@ -393,17 +441,24 @@ static void mmc_wait_done(struct mmc_request *mrq)
*
* Sets the done callback to be called when request is completed by the card.
* Starts data mmc request execution
* If a transfer is already ongoing, wait for the command line to become
* available before sending another command.
*/
static int __mmc_start_data_req(struct mmc_host *host, struct mmc_request *mrq)
{
int err;
mmc_wait_ongoing_tfr_cmd(host);
mrq->done = mmc_wait_data_done;
mrq->host = host;
init_completion(&mrq->cmd_completion);
err = mmc_start_request(host, mrq);
if (err) {
mrq->cmd->error = err;
mmc_complete_cmd(mrq);
mmc_wait_data_done(mrq);
}
......@@ -414,12 +469,17 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
{
int err;
mmc_wait_ongoing_tfr_cmd(host);
init_completion(&mrq->completion);
mrq->done = mmc_wait_done;
init_completion(&mrq->cmd_completion);
err = mmc_start_request(host, mrq);
if (err) {
mrq->cmd->error = err;
mmc_complete_cmd(mrq);
complete(&mrq->completion);
}
......@@ -483,8 +543,7 @@ static int mmc_wait_for_data_req_done(struct mmc_host *host,
return err;
}
static void mmc_wait_for_req_done(struct mmc_host *host,
struct mmc_request *mrq)
void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq)
{
struct mmc_command *cmd;
......@@ -525,6 +584,28 @@ static void mmc_wait_for_req_done(struct mmc_host *host,
mmc_retune_release(host);
}
EXPORT_SYMBOL(mmc_wait_for_req_done);
/**
* mmc_is_req_done - Determine if a 'cap_cmd_during_tfr' request is done
* @host: MMC host
* @mrq: MMC request
*
* mmc_is_req_done() is used with requests that have
* mrq->cap_cmd_during_tfr = true. mmc_is_req_done() must be called after
* starting a request and before waiting for it to complete. That is,
* either in between calls to mmc_start_req(), or after mmc_wait_for_req()
* and before mmc_wait_for_req_done(). If it is called at other times the
* result is not meaningful.
*/
bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq)
{
if (host->areq)
return host->context_info.is_done_rcv;
else
return completion_done(&mrq->completion);
}
EXPORT_SYMBOL(mmc_is_req_done);
/**
* mmc_pre_req - Prepare for a new request
......@@ -645,13 +726,18 @@ EXPORT_SYMBOL(mmc_start_req);
* @mrq: MMC request to start
*
* Start a new MMC custom command request for a host, and wait
* for the command to complete. Does not attempt to parse the
* response.
* for the command to complete. In the case of 'cap_cmd_during_tfr'
* requests, the transfer is ongoing and the caller can issue further
* commands that do not use the data lines, and then wait by calling
* mmc_wait_for_req_done().
* Does not attempt to parse the response.
*/
void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq)
{
__mmc_start_req(host, mrq);
mmc_wait_for_req_done(host, mrq);
if (!mrq->cap_cmd_during_tfr)
mmc_wait_for_req_done(host, mrq);
}
EXPORT_SYMBOL(mmc_wait_for_req);
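To make the cap_cmd_during_tfr contract above concrete, here is a hedged
sketch of a caller (the function name is invented; the flow mirrors
mmc_test_ongoing_transfer() added elsewhere in this pull): it starts the
request, issues CMD13 on the command line while data moves, then waits for
the transfer itself.

#include <linux/mmc/card.h>
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>
#include <linux/mmc/mmc.h>

/* Illustrative only; assumes host->caps has MMC_CAP_CMD_DURING_TFR set. */
static int example_poll_during_transfer(struct mmc_card *card,
					struct mmc_request *mrq)
{
	struct mmc_host *host = card->host;
	struct mmc_command cmd = {0};
	int err = 0;

	mrq->cap_cmd_during_tfr = true;

	/* Starts the request and returns without waiting for it to finish. */
	mmc_wait_for_req(host, mrq);

	while (!mmc_is_req_done(host, mrq)) {
		/* Send Status; the core waits for the command line if busy. */
		cmd.opcode = MMC_SEND_STATUS;
		cmd.arg = card->rca << 16;
		cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;
		err = mmc_wait_for_cmd(host, &cmd, 0);
		if (err)
			break;
	}

	/* Block until the data transfer itself has finished. */
	mmc_wait_for_req_done(host, mrq);

	/*
	 * For cap_cmd_during_tfr the caller must send mrq->data->stop itself
	 * if one is required (omitted here for brevity).
	 */
	if (err)
		return err;
	if (mrq->cmd->error)
		return mrq->cmd->error;
	return mrq->data->error;
}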
......@@ -2202,6 +2288,54 @@ static int mmc_do_erase(struct mmc_card *card, unsigned int from,
return err;
}
static unsigned int mmc_align_erase_size(struct mmc_card *card,
unsigned int *from,
unsigned int *to,
unsigned int nr)
{
unsigned int from_new = *from, nr_new = nr, rem;
/*
* When 'card->erase_size' is a power of 2, we can use round_up/down()
* to align the erase size efficiently.
*/
if (is_power_of_2(card->erase_size)) {
unsigned int temp = from_new;
from_new = round_up(temp, card->erase_size);
rem = from_new - temp;
if (nr_new > rem)
nr_new -= rem;
else
return 0;
nr_new = round_down(nr_new, card->erase_size);
} else {
rem = from_new % card->erase_size;
if (rem) {
rem = card->erase_size - rem;
from_new += rem;
if (nr_new > rem)
nr_new -= rem;
else
return 0;
}
rem = nr_new % card->erase_size;
if (rem)
nr_new -= rem;
}
if (nr_new == 0)
return 0;
*to = from_new + nr_new;
*from = from_new;
return nr_new;
}
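A quick worked example of the power-of-two path above, with illustrative
numbers (not taken from the patch): for card->erase_size = 1024 sectors,
*from = 1500 and nr = 5000, from_new becomes round_up(1500, 1024) = 2048 and
rem = 548, so nr_new = 5000 - 548 = 4452, which round_down() trims to 4096;
the function returns 4096 with *from = 2048 and *to = 6144, i.e. the request
shrinks to the largest fully aligned erase-group range inside the original
span.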
/**
* mmc_erase - erase sectors.
* @card: card to erase
......@@ -2240,26 +2374,12 @@ int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr,
return -EINVAL;
}
if (arg == MMC_ERASE_ARG) {
rem = from % card->erase_size;
if (rem) {
rem = card->erase_size - rem;
from += rem;
if (nr > rem)
nr -= rem;
else
return 0;
}
rem = nr % card->erase_size;
if (rem)
nr -= rem;
}
if (arg == MMC_ERASE_ARG)
nr = mmc_align_erase_size(card, &from, &to, nr);
if (nr == 0)
return 0;
to = from + nr;
if (to <= from)
return -EINVAL;
......@@ -2352,6 +2472,8 @@ static unsigned int mmc_do_calc_max_discard(struct mmc_card *card,
struct mmc_host *host = card->host;
unsigned int max_discard, x, y, qty = 0, max_qty, min_qty, timeout;
unsigned int last_timeout = 0;
unsigned int max_busy_timeout = host->max_busy_timeout ?
host->max_busy_timeout : MMC_ERASE_TIMEOUT_MS;
if (card->erase_shift) {
max_qty = UINT_MAX >> card->erase_shift;
......@@ -2374,15 +2496,15 @@ static unsigned int mmc_do_calc_max_discard(struct mmc_card *card,
* matter what size of 'host->max_busy_timeout', but if the
* 'host->max_busy_timeout' is large enough for more discard sectors,
* then we can continue to increase the max discard sectors until we
* get a balance value.
* get a balance value. In cases when the 'host->max_busy_timeout'
* isn't specified, use the default max erase timeout.
*/
do {
y = 0;
for (x = 1; x && x <= max_qty && max_qty - x >= qty; x <<= 1) {
timeout = mmc_erase_timeout(card, arg, qty + x);
if (qty + x > min_qty &&
timeout > host->max_busy_timeout)
if (qty + x > min_qty && timeout > max_busy_timeout)
break;
if (timeout < last_timeout)
......@@ -2427,9 +2549,6 @@ unsigned int mmc_calc_max_discard(struct mmc_card *card)
struct mmc_host *host = card->host;
unsigned int max_discard, max_trim;
if (!host->max_busy_timeout)
return UINT_MAX;
/*
* Without erase_group_def set, MMC erase timeout depends on clock
frequency, which can change. In that case, the best choice is
......@@ -2447,7 +2566,8 @@ unsigned int mmc_calc_max_discard(struct mmc_card *card)
max_discard = 0;
}
pr_debug("%s: calculated max. discard sectors %u for timeout %u ms\n",
mmc_hostname(host), max_discard, host->max_busy_timeout);
mmc_hostname(host), max_discard, host->max_busy_timeout ?
host->max_busy_timeout : MMC_ERASE_TIMEOUT_MS);
return max_discard;
}
EXPORT_SYMBOL(mmc_calc_max_discard);
......@@ -2456,7 +2576,8 @@ int mmc_set_blocklen(struct mmc_card *card, unsigned int blocklen)
{
struct mmc_command cmd = {0};
if (mmc_card_blockaddr(card) || mmc_card_ddr52(card))
if (mmc_card_blockaddr(card) || mmc_card_ddr52(card) ||
mmc_card_hs400(card) || mmc_card_hs400es(card))
return 0;
cmd.opcode = MMC_SET_BLOCKLEN;
......
......@@ -1029,6 +1029,10 @@ static int mmc_select_hs(struct mmc_card *card)
err = mmc_switch_status(card);
}
if (err)
pr_warn("%s: switch to high-speed failed, err:%d\n",
mmc_hostname(card->host), err);
return err;
}
......@@ -1265,11 +1269,8 @@ static int mmc_select_hs400es(struct mmc_card *card)
/* Switch card to HS mode */
err = mmc_select_hs(card);
if (err) {
pr_err("%s: switch to high-speed failed, err:%d\n",
mmc_hostname(host), err);
if (err)
goto out_err;
}
err = mmc_switch_status(card);
if (err)
......
......@@ -16,6 +16,8 @@
#include <linux/device.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>
#include <linux/delay.h>
#include <linux/property.h>
#include <linux/mmc/host.h>
......@@ -24,6 +26,7 @@
struct mmc_pwrseq_simple {
struct mmc_pwrseq pwrseq;
bool clk_enabled;
u32 post_power_on_delay_ms;
struct clk *ext_clk;
struct gpio_descs *reset_gpios;
};
......@@ -64,6 +67,9 @@ static void mmc_pwrseq_simple_post_power_on(struct mmc_host *host)
struct mmc_pwrseq_simple *pwrseq = to_pwrseq_simple(host->pwrseq);
mmc_pwrseq_simple_set_gpios_value(pwrseq, 0);
if (pwrseq->post_power_on_delay_ms)
msleep(pwrseq->post_power_on_delay_ms);
}
static void mmc_pwrseq_simple_power_off(struct mmc_host *host)
......@@ -111,6 +117,9 @@ static int mmc_pwrseq_simple_probe(struct platform_device *pdev)
return PTR_ERR(pwrseq->reset_gpios);
}
device_property_read_u32(dev, "post-power-on-delay-ms",
&pwrseq->post_power_on_delay_ms);
pwrseq->pwrseq.dev = dev;
pwrseq->pwrseq.ops = &mmc_pwrseq_simple_ops;
pwrseq->pwrseq.owner = THIS_MODULE;
......
......@@ -223,8 +223,7 @@ static int mmc_decode_scr(struct mmc_card *card)
static int mmc_read_ssr(struct mmc_card *card)
{
unsigned int au, es, et, eo;
int err, i;
u32 *ssr;
int i;
if (!(card->csd.cmdclass & CCC_APP_SPEC)) {
pr_warn("%s: card lacks mandatory SD Status function\n",
......@@ -232,33 +231,27 @@ static int mmc_read_ssr(struct mmc_card *card)
return 0;
}
ssr = kmalloc(64, GFP_KERNEL);
if (!ssr)
return -ENOMEM;
err = mmc_app_sd_status(card, ssr);
if (err) {
if (mmc_app_sd_status(card, card->raw_ssr)) {
pr_warn("%s: problem reading SD Status register\n",
mmc_hostname(card->host));
err = 0;
goto out;
return 0;
}
for (i = 0; i < 16; i++)
ssr[i] = be32_to_cpu(ssr[i]);
card->raw_ssr[i] = be32_to_cpu(card->raw_ssr[i]);
/*
* UNSTUFF_BITS only works with four u32s so we have to offset the
* bitfield positions accordingly.
*/
au = UNSTUFF_BITS(ssr, 428 - 384, 4);
au = UNSTUFF_BITS(card->raw_ssr, 428 - 384, 4);
if (au) {
if (au <= 9 || card->scr.sda_spec3) {
card->ssr.au = sd_au_size[au];
es = UNSTUFF_BITS(ssr, 408 - 384, 16);
et = UNSTUFF_BITS(ssr, 402 - 384, 6);
es = UNSTUFF_BITS(card->raw_ssr, 408 - 384, 16);
et = UNSTUFF_BITS(card->raw_ssr, 402 - 384, 6);
if (es && et) {
eo = UNSTUFF_BITS(ssr, 400 - 384, 2);
eo = UNSTUFF_BITS(card->raw_ssr, 400 - 384, 2);
card->ssr.erase_timeout = (et * 1000) / es;
card->ssr.erase_offset = eo * 1000;
}
......@@ -267,9 +260,8 @@ static int mmc_read_ssr(struct mmc_card *card)
mmc_hostname(card->host));
}
}
out:
kfree(ssr);
return err;
return 0;
}
/*
......@@ -666,6 +658,14 @@ MMC_DEV_ATTR(cid, "%08x%08x%08x%08x\n", card->raw_cid[0], card->raw_cid[1],
MMC_DEV_ATTR(csd, "%08x%08x%08x%08x\n", card->raw_csd[0], card->raw_csd[1],
card->raw_csd[2], card->raw_csd[3]);
MMC_DEV_ATTR(scr, "%08x%08x\n", card->raw_scr[0], card->raw_scr[1]);
MMC_DEV_ATTR(ssr,
"%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x\n",
card->raw_ssr[0], card->raw_ssr[1], card->raw_ssr[2],
card->raw_ssr[3], card->raw_ssr[4], card->raw_ssr[5],
card->raw_ssr[6], card->raw_ssr[7], card->raw_ssr[8],
card->raw_ssr[9], card->raw_ssr[10], card->raw_ssr[11],
card->raw_ssr[12], card->raw_ssr[13], card->raw_ssr[14],
card->raw_ssr[15]);
MMC_DEV_ATTR(date, "%02d/%04d\n", card->cid.month, card->cid.year);
MMC_DEV_ATTR(erase_size, "%u\n", card->erase_size << 9);
MMC_DEV_ATTR(preferred_erase_size, "%u\n", card->pref_erase << 9);
......@@ -698,6 +698,7 @@ static struct attribute *sd_std_attrs[] = {
&dev_attr_cid.attr,
&dev_attr_csd.attr,
&dev_attr_scr.attr,
&dev_attr_ssr.attr,
&dev_attr_date.attr,
&dev_attr_erase_size.attr,
&dev_attr_preferred_erase_size.attr,
......
......@@ -26,8 +26,8 @@
*/
void sdio_claim_host(struct sdio_func *func)
{
BUG_ON(!func);
BUG_ON(!func->card);
if (WARN_ON(!func))
return;
mmc_claim_host(func->card->host);
}
......@@ -42,8 +42,8 @@ EXPORT_SYMBOL_GPL(sdio_claim_host);
*/
void sdio_release_host(struct sdio_func *func)
{
BUG_ON(!func);
BUG_ON(!func->card);
if (WARN_ON(!func))
return;
mmc_release_host(func->card->host);
}
......@@ -62,8 +62,8 @@ int sdio_enable_func(struct sdio_func *func)
unsigned char reg;
unsigned long timeout;
BUG_ON(!func);
BUG_ON(!func->card);
if (!func)
return -EINVAL;
pr_debug("SDIO: Enabling device %s...\n", sdio_func_id(func));
......@@ -112,8 +112,8 @@ int sdio_disable_func(struct sdio_func *func)
int ret;
unsigned char reg;
BUG_ON(!func);
BUG_ON(!func->card);
if (!func)
return -EINVAL;
pr_debug("SDIO: Disabling device %s...\n", sdio_func_id(func));
......@@ -307,6 +307,9 @@ static int sdio_io_rw_ext_helper(struct sdio_func *func, int write,
unsigned max_blocks;
int ret;
if (!func || (func->num > 7))
return -EINVAL;
/* Do the bulk of the transfer using block mode (if supported). */
if (func->card->cccr.multi_block && (size > sdio_max_byte_size(func))) {
/* Blocks per command is limited by host count, host transfer
......@@ -367,7 +370,10 @@ u8 sdio_readb(struct sdio_func *func, unsigned int addr, int *err_ret)
int ret;
u8 val;
BUG_ON(!func);
if (!func) {
if (err_ret)
	*err_ret = -EINVAL;
return 0xFF;
}
if (err_ret)
*err_ret = 0;
......@@ -398,7 +404,10 @@ void sdio_writeb(struct sdio_func *func, u8 b, unsigned int addr, int *err_ret)
{
int ret;
BUG_ON(!func);
if (!func) {
if (err_ret)
	*err_ret = -EINVAL;
return;
}
ret = mmc_io_rw_direct(func->card, 1, func->num, addr, b, NULL);
if (err_ret)
......@@ -623,7 +632,10 @@ unsigned char sdio_f0_readb(struct sdio_func *func, unsigned int addr,
int ret;
unsigned char val;
BUG_ON(!func);
if (!func) {
if (err_ret)
	*err_ret = -EINVAL;
return 0xFF;
}
if (err_ret)
*err_ret = 0;
......@@ -658,7 +670,10 @@ void sdio_f0_writeb(struct sdio_func *func, unsigned char b, unsigned int addr,
{
int ret;
BUG_ON(!func);
if (!func) {
if (err_ret)
	*err_ret = -EINVAL;
return;
}
if ((addr < 0xF0 || addr > 0xFF) && (!mmc_card_lenient_fn0(func->card))) {
if (err_ret)
......@@ -684,8 +699,8 @@ EXPORT_SYMBOL_GPL(sdio_f0_writeb);
*/
mmc_pm_flag_t sdio_get_host_pm_caps(struct sdio_func *func)
{
BUG_ON(!func);
BUG_ON(!func->card);
if (!func)
return 0;
return func->card->host->pm_caps;
}
......@@ -707,8 +722,8 @@ int sdio_set_host_pm_flags(struct sdio_func *func, mmc_pm_flag_t flags)
{
struct mmc_host *host;
BUG_ON(!func);
BUG_ON(!func->card);
if (!func)
return -EINVAL;
host = func->card->host;
......
......@@ -24,8 +24,6 @@ int mmc_send_io_op_cond(struct mmc_host *host, u32 ocr, u32 *rocr)
struct mmc_command cmd = {0};
int i, err = 0;
BUG_ON(!host);
cmd.opcode = SD_IO_SEND_OP_COND;
cmd.arg = ocr;
cmd.flags = MMC_RSP_SPI_R4 | MMC_RSP_R4 | MMC_CMD_BCR;
......@@ -71,8 +69,8 @@ static int mmc_io_rw_direct_host(struct mmc_host *host, int write, unsigned fn,
struct mmc_command cmd = {0};
int err;
BUG_ON(!host);
BUG_ON(fn > 7);
if (fn > 7)
return -EINVAL;
/* sanity check */
if (addr & ~0x1FFFF)
......@@ -114,7 +112,6 @@ static int mmc_io_rw_direct_host(struct mmc_host *host, int write, unsigned fn,
int mmc_io_rw_direct(struct mmc_card *card, int write, unsigned fn,
unsigned addr, u8 in, u8 *out)
{
BUG_ON(!card);
return mmc_io_rw_direct_host(card->host, write, fn, addr, in, out);
}
......@@ -129,8 +126,6 @@ int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn,
unsigned int nents, left_size, i;
unsigned int seg_size = card->host->max_seg_size;
BUG_ON(!card);
BUG_ON(fn > 7);
WARN_ON(blksz == 0);
/* sanity check */
......
......@@ -1216,9 +1216,11 @@ static int __init davinci_mmcsd_probe(struct platform_device *pdev)
}
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
irq = platform_get_irq(pdev, 0);
if (!r || irq == NO_IRQ)
if (!r)
return -ENODEV;
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
mem_size = resource_size(r);
mem = devm_request_mem_region(&pdev->dev, r->start, mem_size,
......
......@@ -225,8 +225,12 @@ static void dw_mci_exynos_config_hs400(struct dw_mci *host, u32 timing)
* Not supported to configure register
* related to HS400
*/
if (priv->ctrl_type < DW_MCI_TYPE_EXYNOS5420)
if (priv->ctrl_type < DW_MCI_TYPE_EXYNOS5420) {
if (timing == MMC_TIMING_MMC_HS400)
dev_warn(host->dev,
"cannot configure HS400, unsupported chipset\n");
return;
}
dqs = priv->saved_dqs_en;
strobe = priv->saved_strobe_ctrl;
......
......@@ -131,11 +131,17 @@ static void dw_mci_hi6220_set_ios(struct dw_mci *host, struct mmc_ios *ios)
host->bus_hz = clk_get_rate(host->biu_clk);
}
static int dw_mci_hi6220_execute_tuning(struct dw_mci_slot *slot, u32 opcode)
{
return 0;
}
static const struct dw_mci_drv_data hi6220_data = {
.caps = dw_mci_hi6220_caps,
.switch_voltage = dw_mci_hi6220_switch_voltage,
.set_ios = dw_mci_hi6220_set_ios,
.parse_dt = dw_mci_hi6220_parse_dt,
.execute_tuning = dw_mci_hi6220_execute_tuning,
};
static const struct of_device_id dw_mci_k3_match[] = {
......
......@@ -257,7 +257,7 @@ static void moxart_dma_complete(void *param)
static void moxart_transfer_dma(struct mmc_data *data, struct moxart_host *host)
{
u32 len, dir_data, dir_slave;
unsigned long dma_time;
long dma_time;
struct dma_async_tx_descriptor *desc = NULL;
struct dma_chan *dma_chan;
......@@ -397,7 +397,8 @@ static void moxart_prepare_data(struct moxart_host *host)
static void moxart_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
struct moxart_host *host = mmc_priv(mmc);
unsigned long pio_time, flags;
long pio_time;
unsigned long flags;
u32 status;
spin_lock_irqsave(&host->lock, flags);
......
......@@ -126,7 +126,7 @@ static int sd_response_type(struct mmc_command *cmd)
return SD_RSP_TYPE_R0;
case MMC_RSP_R1:
return SD_RSP_TYPE_R1;
case MMC_RSP_R1 & ~MMC_RSP_CRC:
case MMC_RSP_R1_NO_CRC:
return SD_RSP_TYPE_R1 | SD_NO_CHECK_CRC7;
case MMC_RSP_R1B:
return SD_RSP_TYPE_R1b;
......
......@@ -324,7 +324,7 @@ static void sd_send_cmd_get_rsp(struct rtsx_usb_sdmmc *host,
case MMC_RSP_R1:
rsp_type = SD_RSP_TYPE_R1;
break;
case MMC_RSP_R1 & ~MMC_RSP_CRC:
case MMC_RSP_R1_NO_CRC:
rsp_type = SD_RSP_TYPE_R1 | SD_NO_CHECK_CRC7;
break;
case MMC_RSP_R1B:
......
......@@ -275,7 +275,7 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_emmc = {
.chip = &sdhci_acpi_chip_int,
.caps = MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR |
MMC_CAP_WAIT_WHILE_BUSY,
MMC_CAP_CMD_DURING_TFR | MMC_CAP_WAIT_WHILE_BUSY,
.caps2 = MMC_CAP2_HC_ERASE_SZ,
.flags = SDHCI_ACPI_RUNTIME_PM,
.quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
......
......@@ -253,12 +253,14 @@ static int sdhci_bcm_kona_probe(struct platform_device *pdev)
goto err_pltfm_free;
}
if (clk_set_rate(pltfm_priv->clk, host->mmc->f_max) != 0) {
ret = clk_set_rate(pltfm_priv->clk, host->mmc->f_max);
if (ret) {
dev_err(dev, "Failed to set rate core clock\n");
goto err_pltfm_free;
}
if (clk_prepare_enable(pltfm_priv->clk) != 0) {
ret = clk_prepare_enable(pltfm_priv->clk);
if (ret) {
dev_err(dev, "Failed to enable core clock\n");
goto err_pltfm_free;
}
......
......@@ -98,6 +98,8 @@ static int sdhci_brcmstb_probe(struct platform_device *pdev)
* properties through mmc_of_parse().
*/
host->caps = sdhci_readl(host, SDHCI_CAPABILITIES);
if (of_device_is_compatible(pdev->dev.of_node, "brcm,bcm7425-sdhci"))
host->caps &= ~SDHCI_CAN_64BIT;
host->caps1 = sdhci_readl(host, SDHCI_CAPABILITIES_1);
host->caps1 &= ~(SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_SDR104 |
SDHCI_SUPPORT_DDR50);
......@@ -121,6 +123,7 @@ static int sdhci_brcmstb_probe(struct platform_device *pdev)
static const struct of_device_id sdhci_brcm_of_match[] = {
{ .compatible = "brcm,bcm7425-sdhci" },
{ .compatible = "brcm,bcm7445-sdhci" },
{},
};
MODULE_DEVICE_TABLE(of, sdhci_brcm_of_match);
......@@ -128,7 +131,6 @@ MODULE_DEVICE_TABLE(of, sdhci_brcm_of_match);
static struct platform_driver sdhci_brcmstb_driver = {
.driver = {
.name = "sdhci-brcmstb",
.owner = THIS_MODULE,
.pm = &sdhci_brcmstb_pmops,
.of_match_table = of_match_ptr(sdhci_brcm_of_match),
},
......
......@@ -31,6 +31,7 @@
#include "sdhci-pltfm.h"
#include "sdhci-esdhc.h"
#define ESDHC_SYS_CTRL_DTOCV_MASK 0x0f
#define ESDHC_CTRL_D3CD 0x08
#define ESDHC_BURST_LEN_EN_INCR (1 << 27)
/* VENDOR SPEC register */
......@@ -928,7 +929,8 @@ static unsigned int esdhc_get_max_timeout_count(struct sdhci_host *host)
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host);
return esdhc_is_usdhc(imx_data) ? 1 << 28 : 1 << 27;
/* Doc Errata: the uSDHC actual maximum timeout count is 1 << 29 */
return esdhc_is_usdhc(imx_data) ? 1 << 29 : 1 << 27;
}
static void esdhc_set_timeout(struct sdhci_host *host, struct mmc_command *cmd)
......@@ -937,7 +939,8 @@ static void esdhc_set_timeout(struct sdhci_host *host, struct mmc_command *cmd)
struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host);
/* use maximum timeout counter */
sdhci_writeb(host, esdhc_is_usdhc(imx_data) ? 0xF : 0xE,
esdhc_clrset_le(host, ESDHC_SYS_CTRL_DTOCV_MASK,
esdhc_is_usdhc(imx_data) ? 0xF : 0xE,
SDHCI_TIMEOUT_CONTROL);
}
......
......@@ -26,6 +26,7 @@
#include <linux/phy/phy.h>
#include <linux/regmap.h>
#include "sdhci-pltfm.h"
#include <linux/of.h>
#define SDHCI_ARASAN_CLK_CTRL_OFFSET 0x2c
#define SDHCI_ARASAN_VENDOR_REGISTER 0x78
......@@ -35,6 +36,8 @@
#define CLK_CTRL_TIMEOUT_MASK (0xf << CLK_CTRL_TIMEOUT_SHIFT)
#define CLK_CTRL_TIMEOUT_MIN_EXP 13
#define PHY_CLK_TOO_SLOW_HZ 400000
/*
* On some SoCs the syscon area has a feature where the upper 16-bits of
* each 32-bit register act as a write mask for the lower 16-bits. This allows
......@@ -65,10 +68,12 @@ struct sdhci_arasan_soc_ctl_field {
* accessible via the syscon API.
*
* @baseclkfreq: Where to find corecfg_baseclkfreq
* @clockmultiplier: Where to find corecfg_clockmultiplier
* @hiword_update: If true, use HIWORD_UPDATE to access the syscon
*/
struct sdhci_arasan_soc_ctl_map {
struct sdhci_arasan_soc_ctl_field baseclkfreq;
struct sdhci_arasan_soc_ctl_field clockmultiplier;
bool hiword_update;
};
......@@ -77,6 +82,7 @@ struct sdhci_arasan_soc_ctl_map {
* @host: Pointer to the main SDHCI host structure.
* @clk_ahb: Pointer to the AHB clock
* @phy: Pointer to the generic phy
* @is_phy_on: True if the PHY is on; false if not.
* @sdcardclk_hw: Struct for the clock we might provide to a PHY.
* @sdcardclk: Pointer to normal 'struct clock' for sdcardclk_hw.
* @soc_ctl_base: Pointer to regmap for syscon for soc_ctl registers.
......@@ -86,16 +92,22 @@ struct sdhci_arasan_data {
struct sdhci_host *host;
struct clk *clk_ahb;
struct phy *phy;
bool is_phy_on;
struct clk_hw sdcardclk_hw;
struct clk *sdcardclk;
struct regmap *soc_ctl_base;
const struct sdhci_arasan_soc_ctl_map *soc_ctl_map;
unsigned int quirks; /* Arasan deviations from spec */
/* Controller does not have CD wired and will not function normally without */
#define SDHCI_ARASAN_QUIRK_FORCE_CDTEST BIT(0)
};
static const struct sdhci_arasan_soc_ctl_map rk3399_soc_ctl_map = {
.baseclkfreq = { .reg = 0xf000, .width = 8, .shift = 8 },
.clockmultiplier = { .reg = 0xf02c, .width = 8, .shift = 0},
.hiword_update = true,
};
......@@ -170,13 +182,47 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host);
bool ctrl_phy = false;
if (clock > MMC_HIGH_52_MAX_DTR && (!IS_ERR(sdhci_arasan->phy)))
ctrl_phy = true;
if (!IS_ERR(sdhci_arasan->phy)) {
if (!sdhci_arasan->is_phy_on && clock <= PHY_CLK_TOO_SLOW_HZ) {
/*
* If PHY off, set clock to max speed and power PHY on.
*
* Although PHY docs apparently suggest power cycling
* when changing the clock, the PHY doesn't like to be
* powered on while at low speeds like those used in ID
* mode. Even worse is powering the PHY on while the
* clock is off.
*
* To work around the PHY limitations, the best we can
* do is to power it on at a faster speed and then slam
* through low speeds without power cycling.
*/
sdhci_set_clock(host, host->max_clk);
spin_unlock_irq(&host->lock);
phy_power_on(sdhci_arasan->phy);
spin_lock_irq(&host->lock);
sdhci_arasan->is_phy_on = true;
/*
* We'll now fall through to the below case with
* ctrl_phy = false (so we won't turn off/on). The
* sdhci_set_clock() will set the real clock.
*/
} else if (clock > PHY_CLK_TOO_SLOW_HZ) {
/*
* At higher clock speeds the PHY is fine being power
* cycled and docs say you _should_ power cycle when
* changing clock speeds.
*/
ctrl_phy = true;
}
}
if (ctrl_phy) {
if (ctrl_phy && sdhci_arasan->is_phy_on) {
spin_unlock_irq(&host->lock);
phy_power_off(sdhci_arasan->phy);
spin_lock_irq(&host->lock);
sdhci_arasan->is_phy_on = false;
}
sdhci_set_clock(host, clock);
......@@ -185,6 +231,7 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
spin_unlock_irq(&host->lock);
phy_power_on(sdhci_arasan->phy);
spin_lock_irq(&host->lock);
sdhci_arasan->is_phy_on = true;
}
}
......@@ -203,12 +250,27 @@ static void sdhci_arasan_hs400_enhanced_strobe(struct mmc_host *mmc,
writel(vendor, host->ioaddr + SDHCI_ARASAN_VENDOR_REGISTER);
}
void sdhci_arasan_reset(struct sdhci_host *host, u8 mask)
{
u8 ctrl;
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host);
sdhci_reset(host, mask);
if (sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_FORCE_CDTEST) {
ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
ctrl |= SDHCI_CTRL_CDTEST_INS | SDHCI_CTRL_CDTEST_EN;
sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
}
}
static struct sdhci_ops sdhci_arasan_ops = {
.set_clock = sdhci_arasan_set_clock,
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.get_timeout_clock = sdhci_arasan_get_timeout_clock,
.set_bus_width = sdhci_set_bus_width,
.reset = sdhci_reset,
.reset = sdhci_arasan_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
};
......@@ -239,13 +301,14 @@ static int sdhci_arasan_suspend(struct device *dev)
if (ret)
return ret;
if (!IS_ERR(sdhci_arasan->phy)) {
if (!IS_ERR(sdhci_arasan->phy) && sdhci_arasan->is_phy_on) {
ret = phy_power_off(sdhci_arasan->phy);
if (ret) {
dev_err(dev, "Cannot power off phy.\n");
sdhci_resume_host(host);
return ret;
}
sdhci_arasan->is_phy_on = false;
}
clk_disable(pltfm_host->clk);
......@@ -281,12 +344,13 @@ static int sdhci_arasan_resume(struct device *dev)
return ret;
}
if (!IS_ERR(sdhci_arasan->phy)) {
if (!IS_ERR(sdhci_arasan->phy) && host->mmc->actual_clock) {
ret = phy_power_on(sdhci_arasan->phy);
if (ret) {
dev_err(dev, "Cannot power on phy.\n");
return ret;
}
sdhci_arasan->is_phy_on = true;
}
return sdhci_resume_host(host);
......@@ -337,6 +401,45 @@ static const struct clk_ops arasan_sdcardclk_ops = {
.recalc_rate = sdhci_arasan_sdcardclk_recalc_rate,
};
/**
* sdhci_arasan_update_clockmultiplier - Set corecfg_clockmultiplier
*
* The corecfg_clockmultiplier is supposed to contain the clock multiplier
* value of the programmable clock generator.
*
* NOTES:
* - Many existing devices don't seem to do this and work fine. To keep
* compatibility for old hardware where the device tree doesn't provide a
* register map, this function is a noop if a soc_ctl_map hasn't been provided
* for this platform.
* - The value of corecfg_clockmultiplier should match the corresponding
* value read from the SDHCI capability register, so this function is
* called once at probe time and never again.
*
* @host: The sdhci_host
* @value: The clock multiplier value to program
*/
static void sdhci_arasan_update_clockmultiplier(struct sdhci_host *host,
u32 value)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host);
const struct sdhci_arasan_soc_ctl_map *soc_ctl_map =
sdhci_arasan->soc_ctl_map;
/* Having a map is optional */
if (!soc_ctl_map)
return;
/* If we have a map, we expect to have a syscon */
if (!sdhci_arasan->soc_ctl_base) {
pr_warn("%s: Have regmap, but no soc-ctl-syscon\n",
mmc_hostname(host->mmc));
return;
}
sdhci_arasan_syscon_write(host, &soc_ctl_map->clockmultiplier, value);
}
/**
* sdhci_arasan_update_baseclkfreq - Set corecfg_baseclkfreq
*
......@@ -462,6 +565,7 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
struct sdhci_host *host;
struct sdhci_pltfm_host *pltfm_host;
struct sdhci_arasan_data *sdhci_arasan;
struct device_node *np = pdev->dev.of_node;
host = sdhci_pltfm_init(pdev, &sdhci_arasan_pdata,
sizeof(*sdhci_arasan));
......@@ -516,8 +620,16 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
}
sdhci_get_of_property(pdev);
if (of_property_read_bool(np, "xlnx,fails-without-test-cd"))
sdhci_arasan->quirks |= SDHCI_ARASAN_QUIRK_FORCE_CDTEST;
pltfm_host->clk = clk_xin;
if (of_device_is_compatible(pdev->dev.of_node,
"rockchip,rk3399-sdhci-5.1"))
sdhci_arasan_update_clockmultiplier(host, 0x0);
sdhci_arasan_update_baseclkfreq(host);
ret = sdhci_arasan_register_sdclk(sdhci_arasan, clk_xin, &pdev->dev);
......@@ -547,12 +659,6 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
goto unreg_clk;
}
ret = phy_power_on(sdhci_arasan->phy);
if (ret < 0) {
dev_err(&pdev->dev, "phy_power_on err.\n");
goto err_phy_power;
}
host->mmc_host_ops.hs400_enhanced_strobe =
sdhci_arasan_hs400_enhanced_strobe;
}
......@@ -564,9 +670,6 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
return 0;
err_add_host:
if (!IS_ERR(sdhci_arasan->phy))
phy_power_off(sdhci_arasan->phy);
err_phy_power:
if (!IS_ERR(sdhci_arasan->phy))
phy_exit(sdhci_arasan->phy);
unreg_clk:
......@@ -589,7 +692,8 @@ static int sdhci_arasan_remove(struct platform_device *pdev)
struct clk *clk_ahb = sdhci_arasan->clk_ahb;
if (!IS_ERR(sdhci_arasan->phy)) {
phy_power_off(sdhci_arasan->phy);
if (sdhci_arasan->is_phy_on)
phy_power_off(sdhci_arasan->phy);
phy_exit(sdhci_arasan->phy);
}
......
......@@ -583,7 +583,7 @@ static int sdhci_esdhc_probe(struct platform_device *pdev)
np = pdev->dev.of_node;
if (of_get_property(np, "little-endian", NULL))
if (of_property_read_bool(np, "little-endian"))
host = sdhci_pltfm_init(pdev, &sdhci_esdhc_le_pdata,
sizeof(struct sdhci_esdhc));
else
......
......@@ -156,7 +156,7 @@ static void sdhci_pci_add_own_cd(struct sdhci_pci_slot *slot)
if (!gpio_is_valid(gpio))
return;
err = gpio_request(gpio, "sd_cd");
err = devm_gpio_request(&slot->chip->pdev->dev, gpio, "sd_cd");
if (err < 0)
goto out;
......@@ -179,7 +179,7 @@ static void sdhci_pci_add_own_cd(struct sdhci_pci_slot *slot)
return;
out_free:
gpio_free(gpio);
devm_gpio_free(&slot->chip->pdev->dev, gpio);
out:
dev_warn(&slot->chip->pdev->dev, "failed to setup card detect wake up\n");
}
......@@ -188,8 +188,6 @@ static void sdhci_pci_remove_own_cd(struct sdhci_pci_slot *slot)
{
if (slot->cd_irq >= 0)
free_irq(slot->cd_irq, slot);
if (gpio_is_valid(slot->cd_gpio))
gpio_free(slot->cd_gpio);
}
#else
......@@ -356,6 +354,7 @@ static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
{
slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR |
MMC_CAP_CMD_DURING_TFR |
MMC_CAP_WAIT_WHILE_BUSY;
slot->host->mmc->caps2 |= MMC_CAP2_HC_ERASE_SZ;
slot->hw_reset = sdhci_pci_int_hw_reset;
......@@ -421,17 +420,30 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_sd = {
/* Define Host controllers for Intel Merrifield platform */
#define INTEL_MRFLD_EMMC_0 0
#define INTEL_MRFLD_EMMC_1 1
#define INTEL_MRFLD_SD 2
#define INTEL_MRFLD_SDIO 3
static int intel_mrfld_mmc_probe_slot(struct sdhci_pci_slot *slot)
{
if ((PCI_FUNC(slot->chip->pdev->devfn) != INTEL_MRFLD_EMMC_0) &&
(PCI_FUNC(slot->chip->pdev->devfn) != INTEL_MRFLD_EMMC_1))
/* SD support is not ready yet */
unsigned int func = PCI_FUNC(slot->chip->pdev->devfn);
switch (func) {
case INTEL_MRFLD_EMMC_0:
case INTEL_MRFLD_EMMC_1:
slot->host->mmc->caps |= MMC_CAP_NONREMOVABLE |
MMC_CAP_8_BIT_DATA |
MMC_CAP_1_8V_DDR;
break;
case INTEL_MRFLD_SD:
slot->host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;
break;
case INTEL_MRFLD_SDIO:
slot->host->mmc->caps |= MMC_CAP_NONREMOVABLE |
MMC_CAP_POWER_OFF_CARD;
break;
default:
return -ENODEV;
slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
MMC_CAP_1_8V_DDR;
}
return 0;
}
......@@ -1615,7 +1627,6 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
slot->chip = chip;
slot->host = host;
slot->pci_bar = bar;
slot->rst_n_gpio = -EINVAL;
slot->cd_gpio = -EINVAL;
slot->cd_idx = -1;
......@@ -1643,27 +1654,22 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
host->irq = pdev->irq;
ret = pci_request_region(pdev, bar, mmc_hostname(host->mmc));
ret = pcim_iomap_regions(pdev, BIT(bar), mmc_hostname(host->mmc));
if (ret) {
dev_err(&pdev->dev, "cannot request region\n");
goto cleanup;
}
host->ioaddr = pci_ioremap_bar(pdev, bar);
if (!host->ioaddr) {
dev_err(&pdev->dev, "failed to remap registers\n");
ret = -ENOMEM;
goto release;
}
host->ioaddr = pcim_iomap_table(pdev)[bar];
if (chip->fixes && chip->fixes->probe_slot) {
ret = chip->fixes->probe_slot(slot);
if (ret)
goto unmap;
goto cleanup;
}
if (gpio_is_valid(slot->rst_n_gpio)) {
if (!gpio_request(slot->rst_n_gpio, "eMMC_reset")) {
if (!devm_gpio_request(&pdev->dev, slot->rst_n_gpio, "eMMC_reset")) {
gpio_direction_output(slot->rst_n_gpio, 1);
slot->host->mmc->caps |= MMC_CAP_HW_RESET;
slot->hw_reset = sdhci_pci_gpio_hw_reset;
......@@ -1702,18 +1708,9 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
return slot;
remove:
if (gpio_is_valid(slot->rst_n_gpio))
gpio_free(slot->rst_n_gpio);
if (chip->fixes && chip->fixes->remove_slot)
chip->fixes->remove_slot(slot, 0);
unmap:
iounmap(host->ioaddr);
release:
pci_release_region(pdev, bar);
cleanup:
if (slot->data && slot->data->cleanup)
slot->data->cleanup(slot->data);
......@@ -1738,17 +1735,12 @@ static void sdhci_pci_remove_slot(struct sdhci_pci_slot *slot)
sdhci_remove_host(slot->host, dead);
if (gpio_is_valid(slot->rst_n_gpio))
gpio_free(slot->rst_n_gpio);
if (slot->chip->fixes && slot->chip->fixes->remove_slot)
slot->chip->fixes->remove_slot(slot, dead);
if (slot->data && slot->data->cleanup)
slot->data->cleanup(slot->data);
pci_release_region(slot->chip->pdev, slot->pci_bar);
sdhci_free_host(slot->host);
}
......
......@@ -72,7 +72,6 @@ struct sdhci_pci_slot {
struct sdhci_host *host;
struct sdhci_pci_data *data;
int pci_bar;
int rst_n_gpio;
int cd_gpio;
int cd_irq;
......
......@@ -156,13 +156,6 @@ struct sdhci_host *sdhci_pltfm_init(struct platform_device *pdev,
host->quirks2 = pdata->quirks2;
}
/*
* Some platforms need to probe the controller to be able to
* determine which caps should be used.
*/
if (host->ops && host->ops->platform_init)
host->ops->platform_init(host);
platform_set_drvdata(pdev, host);
return host;
......
......@@ -391,6 +391,31 @@ static const struct sdhci_tegra_soc_data soc_data_tegra114 = {
.pdata = &sdhci_tegra114_pdata,
};
static const struct sdhci_pltfm_data sdhci_tegra124_pdata = {
.quirks = SDHCI_QUIRK_BROKEN_TIMEOUT_VAL |
SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
SDHCI_QUIRK_SINGLE_POWER_WRITE |
SDHCI_QUIRK_NO_HISPD_BIT |
SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
/*
* The TRM states that the SD/MMC controller found on
* Tegra124 can address 34 bits (the maximum supported by
* the Tegra memory controller), but tests show that DMA
* to or from above 4 GiB doesn't work. This is possibly
* caused by missing programming, though it's not obvious
* what sequence is required. Mark 64-bit DMA broken for
* now to fix this for existing users (e.g. Nyan boards).
*/
SDHCI_QUIRK2_BROKEN_64_BIT_DMA,
.ops = &tegra114_sdhci_ops,
};
static const struct sdhci_tegra_soc_data soc_data_tegra124 = {
.pdata = &sdhci_tegra124_pdata,
};
static const struct sdhci_pltfm_data sdhci_tegra210_pdata = {
.quirks = SDHCI_QUIRK_BROKEN_TIMEOUT_VAL |
SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
......@@ -408,7 +433,7 @@ static const struct sdhci_tegra_soc_data soc_data_tegra210 = {
static const struct of_device_id sdhci_tegra_dt_match[] = {
{ .compatible = "nvidia,tegra210-sdhci", .data = &soc_data_tegra210 },
{ .compatible = "nvidia,tegra124-sdhci", .data = &soc_data_tegra114 },
{ .compatible = "nvidia,tegra124-sdhci", .data = &soc_data_tegra124 },
{ .compatible = "nvidia,tegra114-sdhci", .data = &soc_data_tegra114 },
{ .compatible = "nvidia,tegra30-sdhci", .data = &soc_data_tegra30 },
{ .compatible = "nvidia,tegra20-sdhci", .data = &soc_data_tegra20 },
......
......@@ -888,7 +888,8 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
static inline bool sdhci_auto_cmd12(struct sdhci_host *host,
struct mmc_request *mrq)
{
return !mrq->sbc && (host->flags & SDHCI_AUTO_CMD12);
return !mrq->sbc && (host->flags & SDHCI_AUTO_CMD12) &&
!mrq->cap_cmd_during_tfr;
}
static void sdhci_set_transfer_mode(struct sdhci_host *host,
......@@ -1031,9 +1032,18 @@ static void sdhci_finish_data(struct sdhci_host *host)
sdhci_do_reset(host, SDHCI_RESET_DATA);
}
/* Avoid triggering warning in sdhci_send_command() */
host->cmd = NULL;
sdhci_send_command(host, data->stop);
/*
* 'cap_cmd_during_tfr' request must not use the command line
* after mmc_command_done() has been called. It is the upper layer's
* responsibility to send the stop command if required.
*/
if (data->mrq->cap_cmd_during_tfr) {
sdhci_finish_mrq(host, data->mrq);
} else {
/* Avoid triggering warning in sdhci_send_command() */
host->cmd = NULL;
sdhci_send_command(host, data->stop);
}
} else {
sdhci_finish_mrq(host, data->mrq);
}
......@@ -1165,6 +1175,9 @@ static void sdhci_finish_command(struct sdhci_host *host)
}
}
if (cmd->mrq->cap_cmd_during_tfr && cmd == cmd->mrq->cmd)
mmc_command_done(host->mmc, cmd->mrq);
/*
* The host can send an interrupt when the busy state has
* ended, allowing us to wait without wasting CPU cycles.
......@@ -2062,7 +2075,7 @@ static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
spin_unlock_irqrestore(&host->lock, flags);
/* Wait for Buffer Read Ready interrupt */
wait_event_interruptible_timeout(host->buf_ready_int,
wait_event_timeout(host->buf_ready_int,
(host->tuning_done == 1),
msecs_to_jiffies(50));
spin_lock_irqsave(&host->lock, flags);
......
......@@ -84,6 +84,8 @@
#define SDHCI_CTRL_ADMA32 0x10
#define SDHCI_CTRL_ADMA64 0x18
#define SDHCI_CTRL_8BITBUS 0x20
#define SDHCI_CTRL_CDTEST_INS 0x40
#define SDHCI_CTRL_CDTEST_EN 0x80
#define SDHCI_POWER_CONTROL 0x29
#define SDHCI_POWER_ON 0x01
......@@ -555,7 +557,6 @@ struct sdhci_ops {
void (*set_uhs_signaling)(struct sdhci_host *host, unsigned int uhs);
void (*hw_reset)(struct sdhci_host *host);
void (*adma_workaround)(struct sdhci_host *host, u32 intmask);
void (*platform_init)(struct sdhci_host *host);
void (*card_event)(struct sdhci_host *host);
void (*voltage_switch)(struct sdhci_host *host);
int (*select_drive_strength)(struct sdhci_host *host,
......
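SDHCI_CTRL_CDTEST_INS and SDHCI_CTRL_CDTEST_EN are the Host Control register bits behind the new xlnx,fails-without-test-cd handling: with both set, the controller reports a card as inserted regardless of the CD pin. A rough sketch of how a platform driver might latch them, loosely modelled on the arasan change in this series (the helper name is made up):

#include "sdhci.h"

/*
 * Force the controller's card-detect test mode so an unwired CD line does
 * not keep the controller idle: CDTEST_INS fakes "card present" and
 * CDTEST_EN makes the controller use that value instead of the pin.
 * Hypothetical helper; typically called from a set_ios/probe hook.
 */
static void example_sdhci_enable_cd_test(struct sdhci_host *host)
{
	u8 ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);

	ctrl |= SDHCI_CTRL_CDTEST_INS | SDHCI_CTRL_CDTEST_EN;
	sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
}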
......@@ -94,6 +94,7 @@ static const struct of_device_id sh_mobile_sdhi_of_match[] = {
{ .compatible = "renesas,sdhi-r8a7793", .data = &of_rcar_gen2_compatible, },
{ .compatible = "renesas,sdhi-r8a7794", .data = &of_rcar_gen2_compatible, },
{ .compatible = "renesas,sdhi-r8a7795", .data = &of_rcar_gen3_compatible, },
{ .compatible = "renesas,sdhi-r8a7796", .data = &of_rcar_gen3_compatible, },
{},
};
MODULE_DEVICE_TABLE(of, sh_mobile_sdhi_of_match);
......@@ -213,6 +214,13 @@ static void sh_mobile_sdhi_clk_disable(struct tmio_mmc_host *host)
clk_disable_unprepare(priv->clk);
}
static int sh_mobile_sdhi_card_busy(struct mmc_host *mmc)
{
struct tmio_mmc_host *host = mmc_priv(mmc);
return !(sd_ctrl_read16_and_16_as_32(host, CTL_STATUS) & TMIO_STAT_DAT0);
}
static int sh_mobile_sdhi_start_signal_voltage_switch(struct mmc_host *mmc,
struct mmc_ios *ios)
{
......@@ -369,7 +377,14 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
host->clk_update = sh_mobile_sdhi_clk_update;
host->clk_disable = sh_mobile_sdhi_clk_disable;
host->multi_io_quirk = sh_mobile_sdhi_multi_io_quirk;
host->start_signal_voltage_switch = sh_mobile_sdhi_start_signal_voltage_switch;
/* SDR speeds are only available on Gen2+ */
if (mmc_data->flags & TMIO_MMC_MIN_RCAR2) {
/* card_busy caused issues on r8a73a4 (pre-Gen2) CD-less SDHI */
host->card_busy = sh_mobile_sdhi_card_busy;
host->start_signal_voltage_switch =
sh_mobile_sdhi_start_signal_voltage_switch;
}
/* Originally registers were 16 bit apart, could be 32 or 64 nowadays */
if (!host->bus_shift && resource_size(res) > 0x100) /* old way to determine the shift */
......
......@@ -79,6 +79,9 @@
#define CLK_CTL_DIV_MASK 0xff
#define CLK_CTL_SCLKEN BIT(8)
#define CARD_OPT_WIDTH8 BIT(13)
#define CARD_OPT_WIDTH BIT(15)
#define TMIO_BBS 512 /* Boot block size */
/* Definitions for values the CTRL_SDIO_STATUS register can take. */
......@@ -158,6 +161,7 @@ struct tmio_mmc_host {
void (*clk_disable)(struct tmio_mmc_host *host);
int (*multi_io_quirk)(struct mmc_card *card,
unsigned int direction, int blk_size);
int (*card_busy)(struct mmc_host *mmc);
int (*start_signal_voltage_switch)(struct mmc_host *mmc,
struct mmc_ios *ios);
};
......
......@@ -336,7 +336,9 @@ static int tmio_mmc_start_command(struct tmio_mmc_host *host, struct mmc_command
switch (mmc_resp_type(cmd)) {
case MMC_RSP_NONE: c |= RESP_NONE; break;
case MMC_RSP_R1: c |= RESP_R1; break;
case MMC_RSP_R1:
case MMC_RSP_R1_NO_CRC:
c |= RESP_R1; break;
case MMC_RSP_R1B: c |= RESP_R1B; break;
case MMC_RSP_R2: c |= RESP_R2; break;
case MMC_RSP_R3: c |= RESP_R3; break;
......@@ -730,12 +732,13 @@ static int tmio_mmc_start_data(struct tmio_mmc_host *host,
pr_debug("setup data transfer: blocksize %08x nr_blocks %d\n",
data->blksz, data->blocks);
/* Some hardware cannot perform 2 byte requests in 4 bit mode */
if (host->mmc->ios.bus_width == MMC_BUS_WIDTH_4) {
/* Some hardware cannot perform 2 byte requests in 4/8 bit mode */
if (host->mmc->ios.bus_width == MMC_BUS_WIDTH_4 ||
host->mmc->ios.bus_width == MMC_BUS_WIDTH_8) {
int blksz_2bytes = pdata->flags & TMIO_MMC_BLKSZ_2BYTES;
if (data->blksz < 2 || (data->blksz < 4 && !blksz_2bytes)) {
pr_err("%s: %d byte block unsupported in 4 bit mode\n",
pr_err("%s: %d byte block unsupported in 4/8 bit mode\n",
mmc_hostname(host->mmc), data->blksz);
return -EINVAL;
}
......@@ -857,14 +860,16 @@ static void tmio_mmc_power_off(struct tmio_mmc_host *host)
static void tmio_mmc_set_bus_width(struct tmio_mmc_host *host,
unsigned char bus_width)
{
switch (bus_width) {
case MMC_BUS_WIDTH_1:
sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, 0x80e0);
break;
case MMC_BUS_WIDTH_4:
sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, 0x00e0);
break;
}
u16 reg = sd_ctrl_read16(host, CTL_SD_MEM_CARD_OPT)
& ~(CARD_OPT_WIDTH | CARD_OPT_WIDTH8);
/* reg now applies to MMC_BUS_WIDTH_4 */
if (bus_width == MMC_BUS_WIDTH_1)
reg |= CARD_OPT_WIDTH;
else if (bus_width == MMC_BUS_WIDTH_8)
reg |= CARD_OPT_WIDTH8;
sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, reg);
}
/* Set MMC clock / power.
......@@ -960,20 +965,12 @@ static int tmio_multi_io_quirk(struct mmc_card *card,
return blk_size;
}
static int tmio_mmc_card_busy(struct mmc_host *mmc)
{
struct tmio_mmc_host *host = mmc_priv(mmc);
return !(sd_ctrl_read16_and_16_as_32(host, CTL_STATUS) & TMIO_STAT_DAT0);
}
static struct mmc_host_ops tmio_mmc_ops = {
.request = tmio_mmc_request,
.set_ios = tmio_mmc_set_ios,
.get_ro = tmio_mmc_get_ro,
.get_cd = mmc_gpio_get_cd,
.enable_sdio_irq = tmio_mmc_enable_sdio_irq,
.card_busy = tmio_mmc_card_busy,
.multi_io_quirk = tmio_multi_io_quirk,
};
......@@ -1072,6 +1069,7 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host,
goto host_free;
}
tmio_mmc_ops.card_busy = _host->card_busy;
tmio_mmc_ops.start_signal_voltage_switch = _host->start_signal_voltage_switch;
mmc->ops = &tmio_mmc_ops;
......@@ -1089,6 +1087,15 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host,
!mmc_card_is_removable(mmc) ||
mmc->slot.cd_irq >= 0);
/*
* On Gen2+, eMMC with NONREMOVABLE currently fails because native
* hotplug gets disabled. It seems runtime PM related, but needs further
* research. Since a PM overhaul is planned anyway, keep the device
* active for now by always enabling native hotplug.
*/
if (pdata->flags & TMIO_MMC_MIN_RCAR2)
_host->native_hotplug = true;
if (tmio_mmc_clk_enable(_host) < 0) {
mmc->f_max = pdata->hclk;
mmc->f_min = mmc->f_max / 512;
......
......@@ -292,6 +292,7 @@ struct mmc_card {
u32 raw_cid[4]; /* raw card CID */
u32 raw_csd[4]; /* raw card CSD */
u32 raw_scr[2]; /* raw card SCR */
u32 raw_ssr[16]; /* raw card SSR */
struct mmc_cid cid; /* card identification */
struct mmc_csd csd; /* card specific */
struct mmc_ext_csd ext_csd; /* mmc v4 extended card specific */
......
......@@ -55,6 +55,9 @@ struct mmc_command {
#define MMC_RSP_R6 (MMC_RSP_PRESENT|MMC_RSP_CRC|MMC_RSP_OPCODE)
#define MMC_RSP_R7 (MMC_RSP_PRESENT|MMC_RSP_CRC|MMC_RSP_OPCODE)
/* Can be used by core to poll after switch to MMC HS mode */
#define MMC_RSP_R1_NO_CRC (MMC_RSP_PRESENT|MMC_RSP_OPCODE)
#define mmc_resp_type(cmd) ((cmd)->flags & (MMC_RSP_PRESENT|MMC_RSP_136|MMC_RSP_CRC|MMC_RSP_BUSY|MMC_RSP_OPCODE))
/*
......@@ -133,8 +136,12 @@ struct mmc_request {
struct mmc_command *stop;
struct completion completion;
struct completion cmd_completion;
void (*done)(struct mmc_request *);/* completion function */
struct mmc_host *host;
/* Allow other commands during this ongoing data transfer or busy wait */
bool cap_cmd_during_tfr;
};
struct mmc_card;
......@@ -146,6 +153,9 @@ extern struct mmc_async_req *mmc_start_req(struct mmc_host *,
struct mmc_async_req *, int *);
extern int mmc_interrupt_hpi(struct mmc_card *);
extern void mmc_wait_for_req(struct mmc_host *, struct mmc_request *);
extern void mmc_wait_for_req_done(struct mmc_host *host,
struct mmc_request *mrq);
extern bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq);
extern int mmc_wait_for_cmd(struct mmc_host *, struct mmc_command *, int);
extern int mmc_app_cmd(struct mmc_host *, struct mmc_card *);
extern int mmc_wait_for_app_cmd(struct mmc_host *, struct mmc_card *,
......
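mmc_wait_for_req_done() and mmc_is_req_done() are the pieces an upper layer needs to drive a cap_cmd_during_tfr request: mmc_wait_for_req() returns once the command phase is done, other commands may then be sent on the free command line, and the caller must later collect the data phase itself. A hedged sketch of that flow, assuming 'mrq' is an already prepared data request and 'rca' the card's relative address (the function name and the 1-2ms poll interval are illustrative):

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>
#include <linux/mmc/mmc.h>

static int example_transfer_with_status_poll(struct mmc_host *host,
					     struct mmc_request *mrq, u32 rca)
{
	struct mmc_command status = {};

	if (!(host->caps & MMC_CAP_CMD_DURING_TFR))
		return -EOPNOTSUPP;	/* host cannot interleave commands */

	mrq->cap_cmd_during_tfr = true;

	/* Returns once the data command itself has completed. */
	mmc_wait_for_req(host, mrq);
	if (mrq->cmd->error)
		return mrq->cmd->error;

	/* Data lines are still busy; the command line is free for CMD13. */
	while (!mmc_is_req_done(host, mrq)) {
		status.opcode = MMC_SEND_STATUS;
		status.arg = rca << 16;
		status.flags = MMC_RSP_R1 | MMC_CMD_AC;
		if (mmc_wait_for_cmd(host, &status, 0))
			break;		/* stop polling, still reap the data below */
		usleep_range(1000, 2000);
	}

	/* Reclaim the request (and any error) once the transfer has ended. */
	mmc_wait_for_req_done(host, mrq);
	return mrq->data ? mrq->data->error : 0;
}

This mirrors what the new mmc_test cases in this series exercise: CMD13 status polling while a data transfer is still ongoing.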
......@@ -17,6 +17,7 @@
#include <linux/scatterlist.h>
#include <linux/mmc/core.h>
#include <linux/dmaengine.h>
#include <linux/reset.h>
#define MAX_MCI_SLOTS 2
......@@ -259,6 +260,7 @@ struct dw_mci_board {
/* delay in ms before detecting cards after interrupt */
u32 detect_delay_ms;
struct reset_control *rstc;
struct dw_mci_dma_ops *dma_ops;
struct dma_pdata *data;
};
......
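The new 'rstc' member lets dw_mmc pulse an optional reset line taken from DT before the controller is brought up. A hedged sketch of that optional-reset pattern (the helper name, the "reset" consumer id and the delays are illustrative; on kernels of this vintage an absent line shows up as an error pointer rather than NULL, hence the IS_ERR checks):

#include <linux/delay.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/reset.h>

/*
 * Fetch an optional reset line and pulse it before touching the controller.
 * Hypothetical helper: callers treat -EPROBE_DEFER as "try probing again
 * later" and any other missing/invalid line as "no reset, carry on".
 */
static struct reset_control *example_get_and_pulse_reset(struct device *dev)
{
	struct reset_control *rstc;

	rstc = devm_reset_control_get_optional(dev, "reset");
	if (PTR_ERR(rstc) == -EPROBE_DEFER)
		return rstc;			/* provider not ready yet */

	if (!IS_ERR_OR_NULL(rstc)) {
		reset_control_assert(rstc);	/* hold the block in reset */
		usleep_range(10, 50);
		reset_control_deassert(rstc);	/* and release it again */
		usleep_range(200, 300);
	}
	return rstc;
}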
......@@ -281,6 +281,7 @@ struct mmc_host {
#define MMC_CAP_DRIVER_TYPE_A (1 << 23) /* Host supports Driver Type A */
#define MMC_CAP_DRIVER_TYPE_C (1 << 24) /* Host supports Driver Type C */
#define MMC_CAP_DRIVER_TYPE_D (1 << 25) /* Host supports Driver Type D */
#define MMC_CAP_CMD_DURING_TFR (1 << 29) /* Commands during data transfer */
#define MMC_CAP_CMD23 (1 << 30) /* CMD23 supported. */
#define MMC_CAP_HW_RESET (1 << 31) /* Hardware reset */
......@@ -382,6 +383,9 @@ struct mmc_host {
struct mmc_async_req *areq; /* active async req */
struct mmc_context_info context_info; /* async synchronization info */
/* Ongoing data transfer that allows commands during transfer */
struct mmc_request *ongoing_mrq;
#ifdef CONFIG_FAIL_MMC_REQUEST
struct fault_attr fail_mmc_request;
#endif
......@@ -418,6 +422,7 @@ int mmc_power_restore_host(struct mmc_host *host);
void mmc_detect_change(struct mmc_host *, unsigned long delay);
void mmc_request_done(struct mmc_host *, struct mmc_request *);
void mmc_command_done(struct mmc_host *host, struct mmc_request *mrq);
static inline void mmc_signal_sdio_irq(struct mmc_host *host)
{
......
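MMC_CAP_CMD_DURING_TFR, the ongoing_mrq pointer and mmc_command_done() are the host-side half of the feature: a host that sets the capability completes the command phase and the data phase separately, as the sdhci hunks above do. A hedged sketch with made-up driver structures (only mmc_command_done()/mmc_request_done() are the real core calls):

#include <linux/mmc/core.h>
#include <linux/mmc/host.h>

/* Made-up per-controller state; real drivers keep the equivalent of this. */
struct example_host {
	struct mmc_host *mmc;		/* mmc->caps includes MMC_CAP_CMD_DURING_TFR */
	struct mmc_request *mrq;	/* request currently on the bus */
};

/* Interrupt path: the command/response has completed, data may still run. */
static void example_cmd_done(struct example_host *eh)
{
	struct mmc_request *mrq = eh->mrq;

	/* Tell the core the command line is free again (data command only). */
	if (mrq->cap_cmd_during_tfr && !mrq->cmd->error)
		mmc_command_done(eh->mmc, mrq);
}

/* Interrupt path: the data transfer has finished (or failed). */
static void example_data_done(struct example_host *eh)
{
	struct mmc_request *mrq = eh->mrq;

	eh->mrq = NULL;
	mmc_request_done(eh->mmc, mrq);	/* completes the whole request */
}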