- 20 Jul, 2011 40 commits
-
Per Forlin authored
Break out code from mmc_blk_issue_rw_rq to create a block request prepare function. This doesn't change any functionality. This helps when handling more than one active block request. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
The way the request data is organized in the mmc queue struct, it only allows processing of one request at a time. This patch adds a new struct to hold mmc queue request data such as sg list, request, blk request and bounce buffers, and updates any functions depending on the mmc queue struct. This prepares for using multiple active requests in one mmc queue. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
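Roughly, the new per-request container looks like the sketch below; the field names are illustrative (drawn from the usual mmc block/queue naming) rather than the exact ones in the patch.

    /* Sketch of a per-request container for the mmc queue; the real
     * struct in the patch may differ in fields and naming. */
    struct mmc_queue_req {
            struct request          *req;           /* block layer request */
            struct mmc_blk_request  brq;            /* mmc command + data for it */
            struct scatterlist      *sg;            /* sg list for the transfer */
            char                    *bounce_buf;    /* optional bounce buffer */
            struct scatterlist      *bounce_sg;
            unsigned int            bounce_sg_len;
    };

    struct mmc_queue {
            /* ... existing fields ... */
            struct mmc_queue_req    mqrq[2];        /* two slots: one active, one
                                                       being prepared */
            struct mmc_queue_req    *mqrq_cur;      /* the request being processed */
    };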
-
Per Forlin authored
Add a test that measures how the mmc bandwidth depends on the number of sg elements in the sg list. The transfer size is fixed and the sg length goes from a few elements up to 512. The purpose is to measure the overhead caused by multiple sg elements. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
Add four tests for read and write performance at different transfer sizes, 4k to 4M: * Read using blocking mmc request * Read using non-blocking mmc request * Write using blocking mmc request * Write using non-blocking mmc request The host driver must support pre_req() and post_req() in order to run the non-blocking test cases. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
Add a debugfs file "testlist" to print all available tests. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
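A minimal sketch of how such a debugfs file can be wired up with the standard seq_file helpers; the test-name table and function names below are placeholders, not the actual mmc_test code.

    #include <linux/kernel.h>
    #include <linux/debugfs.h>
    #include <linux/seq_file.h>

    /* Placeholder list; mmc_test keeps its own table of test cases. */
    static const char * const example_test_names[] = {
            "Basic write",
            "Basic read",
    };

    static int example_testlist_show(struct seq_file *sf, void *data)
    {
            int i;

            for (i = 0; i < ARRAY_SIZE(example_test_names); i++)
                    seq_printf(sf, "%d:\t%s\n", i + 1, example_test_names[i]);
            return 0;
    }

    static int example_testlist_open(struct inode *inode, struct file *file)
    {
            return single_open(file, example_testlist_show, inode->i_private);
    }

    static const struct file_operations example_testlist_fops = {
            .open    = example_testlist_open,
            .read    = seq_read,
            .llseek  = seq_lseek,
            .release = single_release,
    };

    /* registration, e.g. from the card's debugfs init:
     *   debugfs_create_file("testlist", S_IRUGO, parent_dir, card,
     *                       &example_testlist_fops);
     */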
-
Per Forlin authored
pre_req() runs dma_map_sg() and prepares the dma descriptor for the next mmc data transfer. post_req() runs dma_unmap_sg(). If pre_req() is not called before mmci_request(), mmci_request() will prepare the cache and dma just as it did before. Using pre_req() and post_req() is optional for mmci. Signed-off-by: Per Forlin <per.forlin@linaro.org> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
pre_req() runs dma_map_sg(); post_req() runs dma_unmap_sg(). If pre_req() is not called before omap_hsmmc_request(), dma_map_sg() will be issued before starting the transfer. Using pre_req() is optional, but if pre_req() is issued, post_req() must be called as well. Signed-off-by: Per Forlin <per.forlin@linaro.org> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
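In outline, the host-side hooks map the sg list ahead of the transfer and unmap it afterwards; the sketch below only illustrates the pattern under assumed names (the real mmci/omap_hsmmc implementations also track a host-private cookie value and have more elaborate error handling).

    /* Sketch of the pre_req()/post_req() hook pattern, not the literal
     * omap_hsmmc code. */
    static void example_pre_req(struct mmc_host *mmc, struct mmc_request *mrq,
                                bool is_first_req)
    {
            struct mmc_data *data = mrq->data;
            enum dma_data_direction dir;

            if (!data)
                    return;

            dir = (data->flags & MMC_DATA_WRITE) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;

            /* Pay the cache-maintenance cost now, while the previous
             * transfer is still running on the controller. */
            if (dma_map_sg(mmc_dev(mmc), data->sg, data->sg_len, dir) == 0)
                    return;         /* mapping failed; fall back to mapping later */

            data->host_cookie = 1;  /* mark the data as prepared */
    }

    static void example_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
                                 int err)
    {
            struct mmc_data *data = mrq->data;
            enum dma_data_direction dir;

            if (!data || !data->host_cookie)
                    return;

            dir = (data->flags & MMC_DATA_WRITE) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
            dma_unmap_sg(mmc_dev(mmc), data->sg, data->sg_len, dir);
            data->host_cookie = 0;
    }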
-
Per Forlin authored
Previously there was only one function, mmc_wait_for_req(), to start and wait for a request. This patch adds: * mmc_start_req() - starts a request without waiting for it to complete. If there is an ongoing request, it waits for that request to finish, then starts the new one and returns; it does not wait for the new command to complete. This patch also adds new function members in struct mmc_host_ops, only called from core.c: * pre_req - asks the host driver to prepare for the next job * post_req - asks the host driver to clean up after a completed job The intention is to use pre_req() and post_req() to do cache maintenance while a request is active. pre_req() can be called while a request is active to minimize the latency of starting the next job. post_req() can be used after the next job has started to clean up the request, which minimizes the host driver request-end latency. post_req() is typically used before ending the block request and handing the buffer over to the block layer. Add a host-private member in mmc_data to be used by pre_req to mark the data. The host driver will then check this mark to see whether the data has been prepared. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
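To make the intended flow concrete, here is a hypothetical caller loop; the helpers have_more_work(), prepare_next_areq() and finish_block_request() are placeholders, and the prototype shown in the comment is approximate rather than the literal core.c declaration.

    /* Hypothetical issue loop showing the intended overlap between one
     * in-flight request and the preparation of the next.  Approximate
     * prototype added by this series:
     *   struct mmc_async_req *mmc_start_req(struct mmc_host *host,
     *                                       struct mmc_async_req *areq,
     *                                       int *error);
     * The return value is the previously started request, now completed.
     */
    static void example_issue_loop(struct mmc_host *host)
    {
            struct mmc_async_req *done;
            int error = 0;

            while (have_more_work()) {
                    struct mmc_async_req *next = prepare_next_areq();  /* placeholder */

                    /* pre_req() is run for 'next', the previous request is
                     * waited for (and post_req() run on it), then 'next' is
                     * started on the host. */
                    done = mmc_start_req(host, next, &error);
                    if (done)
                            finish_block_request(done, error);   /* placeholder */
            }

            /* Passing NULL drains the last in-flight request. */
            done = mmc_start_req(host, NULL, &error);
            if (done)
                    finish_block_request(done, error);
    }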
-
Madhusudhan Chikkature authored
Update the OMAP HSMMC entry in the MAINTAINERS file, as I will no longer be able to maintain this driver. Signed-off-by: Madhusudhan Chikkature <madhu.cr@ti.com> [khilman@ti.com: change to Orphan rather than complete removal] Signed-off-by: Kevin Hilman <khilman@ti.com> Acked-by: Venkatraman S <svenkatr@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Nicolas Ferre authored
This driver has been used for years with this option enabled. Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Nicolas Ferre authored
Take care of slots while going to suspend state. Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Reviewed-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Adrian Hunter authored
Unless MMC_CAP_8_BIT_DATA is set, the bus width defaults to 4. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Major Lee authored
And hook platform_8bit_width to support 8-bit bus width. Signed-off-by: Major Lee <major_lee@wistron.com> Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Dirk Brandewie <dirk.brandewie@gmail.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
James Hogan authored
If an error occurs midway through a transaction (such as a missing CRC status response after the 2nd block written out of 3), then the FIFO may still contain data which will interfere with the next transaction. Therefore, after an error has been detected, reset the FIFO using the CTRL register. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
James Hogan authored
When a data write isn't acknowledged by the card (so no CRC status token is detected after the data), the error -EIO is returned instead of the -ETIMEDOUT expected by mmc_test 15 - "Correct xfer_size at write (start failure)" and 17 "Correct xfer_size at write (midway failure)". In PIO mode the reported number of bytes transferred is also exaggerated since the last block actually failed. Handle the "Write no CRC" error specially, setting the error to -ETIMEDOUT and setting the bytes_xferred to 0. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
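The essence of the change, as a sketch: decode the "write no CRC" case separately from the other data errors (bit names follow the dw_mmc register definitions; the real driver does this inside its data-error handling path).

    /* Sketch: a write that never received its CRC status token is reported
     * as a timeout with zero bytes transferred, as mmc_test expects. */
    static void example_map_data_error(u32 status, struct mmc_data *data)
    {
            if ((status & SDMMC_INT_EBE) && (data->flags & MMC_DATA_WRITE)) {
                    data->error = -ETIMEDOUT;   /* "write no CRC" */
                    data->bytes_xfered = 0;
            } else if (status & SDMMC_INT_DRTO) {
                    data->error = -ETIMEDOUT;   /* data read timeout */
            } else if (status & SDMMC_INT_DCRC) {
                    data->error = -EILSEQ;      /* data CRC error */
            } else {
                    data->error = -EIO;
            }
    }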
-
James Hogan authored
Remove error messages for timeout and CRC failure, since the error code already indicates the problem. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
James Hogan authored
There are several situations when dw_mci_submit_data_dma() decides to fall back to PIO mode instead of using DMA, due to a short (to avoid overhead) or "complex" (e.g. with unaligned buffers) transaction, even though host->use_dma is set. However dw_mci_stop_dma() decides whether to stop DMA or set the EVENT_XFER_COMPLETE event based on host->use_dma. When falling back to PIO mode this results in data timeout errors getting missed and the driver locking up. Therefore add host->using_dma to indicate whether the current transaction is using dma or not, and adjust dw_mci_stop_dma() to use that instead. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Wonil Choi authored
Signed-off-by: Wonil Choi <wonil22.choi@samsung.com> Signed-off-by: Minho Ban <mhban@samsung.com> Cc: Ben Dooks <ben-linux@fluff.org> Signed-off-by: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Adrian Hunter authored
In general, SDHC hardware timeout cannot be avoided. Accordingly, the maximum timeout is specified to limit the maximum discard size. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Adrian Hunter authored
Some host controllers will not operate without a hardware timeout that is limited in value. However large discards require large timeouts, so there needs to be a way to specify the maximum discard size. A host controller driver may now specify the maximum discard timeout possible so that max_discard_sectors can be calculated. However, for eMMC when the High Capacity Erase Group Size is not in use, the timeout calculation depends on clock rate which may change. For that case Preferred Erase Size is used instead. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
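A deliberately simplified sketch of the resulting calculation; the names are illustrative, and the real code in the mmc core additionally distinguishes erase/trim/discard and recalculates when the timeout depends on the clock rate.

    /* Largest discard (in erase-group multiples) whose worst-case erase
     * timeout still fits within the host's maximum timeout budget. */
    static unsigned int example_max_discard_sectors(unsigned int host_max_timeout_ms,
                                                    unsigned int timeout_per_grp_ms,
                                                    unsigned int sectors_per_grp)
    {
            unsigned int grps = host_max_timeout_ms / timeout_per_grp_ms;

            if (!grps)
                    return 0;       /* host timeout too short for even one group */

            return grps * sectors_per_grp;
    }

    /* The block queue's max_discard_sectors limit is then capped to this value. */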
-
Shawn Guo authored
The flag ESDHC_FLAG_GPIO_FOR_CD_WP is only used for CD, so there is no need to mention WP in the flag name. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Shawn Guo authored
The function esdhc_readl_le() intends to clear the SDHCI_CARD_PRESENT bit when the card-detect gpio reports that there is no card, but it does not actually clear the bit. This patch fixes that. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Acked-by: Wolfram Sang <w.sang@pengutronix.de> Cc: <stable@kernel.org> Signed-off-by: Chris Ball <cjb@laptop.org>
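A schematic version of the read hook (cd_gpio stands in for the board-provided card-detect gpio, assumed here to read high when no card is present; the real driver gets it from platform data):

    static unsigned int cd_gpio;    /* assumed board-provided gpio number */

    static u32 example_esdhc_readl_le(struct sdhci_host *host, int reg)
    {
            u32 val = readl(host->ioaddr + reg);

            /* gpio reads high when no card is inserted on this board */
            if (reg == SDHCI_PRESENT_STATE && gpio_get_value(cd_gpio))
                    val &= ~SDHCI_CARD_PRESENT;  /* the clear the original
                                                    code failed to apply */

            return val;
    }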
-
Shawn Guo authored
The issue was initially found by Eric Benard, as described below. http://permalink.gmane.org/gmane.linux.ports.arm.kernel/108031 Not sure about other SDHCI-based controllers, but on the Freescale eSDHC the SDHCI_INT_CARD_INSERT bit will be immediately set again as soon as it is cleared, if a card is inserted. The driver needs to mask the irq to prevent an interrupt storm which would freeze the system. SDHCI_INT_CARD_REMOVE suffers from the same situation. The patch fixes the problem based on the initial idea from Eric Benard. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Cc: Eric Benard <eric@eukrea.com> Tested-by: Arnaud Patard <arnaud.patard@rtp-net.org> Signed-off-by: Chris Ball <cjb@laptop.org>
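Roughly, the fix keeps only the card-detect interrupt that can legitimately fire next enabled; a sketch using the standard sdhci register names (where exactly the real patch does this in the driver differs):

    /* After handling an insert/remove event, listen only for the opposite
     * event so the still-asserted status bit cannot retrigger endlessly. */
    static void example_switch_cd_irqs(struct sdhci_host *host, bool card_present)
    {
            u32 keep = card_present ? SDHCI_INT_CARD_REMOVE : SDHCI_INT_CARD_INSERT;
            u32 drop = card_present ? SDHCI_INT_CARD_INSERT : SDHCI_INT_CARD_REMOVE;
            u32 irqs;

            irqs = sdhci_readl(host, SDHCI_INT_ENABLE);
            irqs &= ~drop;          /* mask the event that would otherwise storm */
            irqs |= keep;           /* wait for the opposite event instead */
            sdhci_writel(host, irqs, SDHCI_INT_ENABLE);
            sdhci_writel(host, irqs, SDHCI_SIGNAL_ENABLE);
    }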
-
Paul Parsons authored
There is a race condition in the tmio_mmc_irq() interrupt handler, caused by the presence of a while loop, which results in warnings of spurious interrupts. This was found on an HP iPAQ hx4700 whose HTC ASIC3 reportedly incorporates the Toshiba TC6380AF controller. Towards the end of a multiple read (CMD18) operation the handler clears the final RXRDY status bit in the first loop iteration, sees the DATAEND status bit at the bottom of the loop, and so clears the DATAEND status bit in the second loop iteration. However the DATAEND interrupt is still queued in the system somewhere and can't be delivered until the handler has returned. This second interrupt is then reported as spurious in the next call to the handler. Likewise for single read (CMD17) operations. And something similar occurs for multiple write (CMD25) and single write (CMD24) operations, where CMDRESPEND and TXRQ status bits are cleared in a single call. In these cases the interrupt handler clears two separate interrupts when it should only clear the one interrupt for which it was invoked. The fix is to remove the while loop. Signed-off-by: Paul Parsons <lost.distance@yahoo.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Paul Parsons authored
Only compile tmio_mmc_dma.o when CONFIG_MMC_SDHI is selected (as y or m). Signed-off-by: Paul Parsons <lost.distance@yahoo.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
James Hogan authored
Update the functions for PIO pushing and pulling data to and from the FIFO so that they can handle unaligned output buffers and unaligned buffer lengths. This makes more of the tests in mmc_test pass. Unaligned lengths in pulls are handled by reading the full FIFO item and storing the remaining bytes in a small internal buffer (part_buf). The next data pull will copy data out of this buffer first before accessing the FIFO again. Similarly, for pushes the final bytes that don't fill a FIFO item are stored in the part_buf (or sent anyway if it's the last transfer), and the part_buf is then included at the beginning of the next buffer pushed. Unaligned buffers in pulls are handled specially if the architecture cannot do efficient unaligned accesses, by reading FIFO items into an aligned local buffer and memcpy'ing them into the output buffer, again storing any remaining bytes in the internal buffer. Similarly, for pushes the buffer is memcpy'd into an aligned local buffer and then written to the FIFO. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Signed-off-by: Chris Ball <cjb@laptop.org>
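To make the part_buf mechanism concrete, here is a much-simplified sketch of the pull path for a 32-bit FIFO (byte order assumed little-endian); it leaves out the push path, the 16/64-bit variants and the aligned-bounce-buffer fallback that the real driver also implements, and all names are illustrative.

    struct example_pio_state {
            u32     part_buf;       /* 0-3 leftover bytes from the last FIFO item */
            int     part_buf_count; /* number of valid bytes in part_buf */
    };

    /* Read 'cnt' bytes into 'buf' (which may be unaligned). */
    static void example_pull_data32(struct example_pio_state *st,
                                    void __iomem *fifo, u8 *buf, int cnt)
    {
            /* First drain bytes left over from the previous call. */
            while (st->part_buf_count && cnt) {
                    *buf++ = st->part_buf & 0xff;
                    st->part_buf >>= 8;
                    st->part_buf_count--;
                    cnt--;
            }

            /* Whole words straight from the FIFO; memcpy tolerates an
             * unaligned destination. */
            while (cnt >= 4) {
                    u32 w = readl(fifo);

                    memcpy(buf, &w, 4);
                    buf += 4;
                    cnt -= 4;
            }

            /* Trailing partial word: read a full FIFO item, keep the rest. */
            if (cnt) {
                    st->part_buf = readl(fifo);
                    st->part_buf_count = 4 - cnt;
                    while (cnt--) {
                            *buf++ = st->part_buf & 0xff;
                            st->part_buf >>= 8;
                    }
            }
    }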
-
James Hogan authored
The FIFO_DEPTH hardware configuration parameter can be found from the power-on value of RX_WMark in the FIFOTH register. This is used to initialise the watermarks, but when calculating the number of free FIFO spaces a preprocessor definition is used which is hard coded to 32. Fix reading the value out of FIFOTH (the default value of the RX_WMark field is FIFO_DEPTH-1, not FIFO_DEPTH). Allow the fifo depth to be overridden by platform data (since a bootloader may have changed FIFOTH, making auto-detection unreliable). Store the fifo_depth for later use. Also fix the calculation of the number of free bytes in the fifo to include the fifo depth in the left shift by the data shift, since the fifo depth is measured in fifo items, not bytes. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Signed-off-by: Chris Ball <cjb@laptop.org>
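Condensed, the probe-time detection and the corrected free-space calculation look roughly like this (FIFOTH[27:16] is RX_WMark per the controller databook; the platform-data field name is an assumption):

    u32 fifoth = mci_readl(host, FIFOTH);
    unsigned int fifo_depth;

    /* Power-on RX_WMark (bits 27:16) resets to FIFO_DEPTH - 1. */
    fifo_depth = ((fifoth >> 16) & 0xfff) + 1;

    /* A bootloader may have reprogrammed FIFOTH, so platform data can
     * override the detected value (field name assumed). */
    if (host->pdata->fifo_depth)
            fifo_depth = host->pdata->fifo_depth;

    host->fifo_depth = fifo_depth;

    /* Free space must be converted from FIFO items to bytes:
     *   free_bytes = (host->fifo_depth - fifo_count) << host->data_shift;
     */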
-
James Hogan authored
Add brackets around use of the dev argument to the mci_{read,write}{w,l,q}() macros, for extra safety. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
James Hogan authored
Convert the card insert/remove tasklet to a workqueue, and call the setpower platform specific callback without the spinlock held. This means neither of the setpower or get_cd callbacks are called from atomic context which allows them to sleep. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Signed-off-by: Chris Ball <cjb@laptop.org>
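In miniature, the conversion follows the usual tasklet-to-workqueue pattern so the handler runs in process context and may sleep; the struct and function names below are illustrative, not the driver's exact ones.

    #include <linux/workqueue.h>

    struct example_host {
            struct work_struct card_work;
            /* ... */
    };

    static void example_card_work(struct work_struct *work)
    {
            struct example_host *host =
                    container_of(work, struct example_host, card_work);

            /* Process context: setpower()/get_cd() may sleep here, and no
             * spinlock is held while they are called. */
            (void)host;
    }

    /* probe:               INIT_WORK(&host->card_work, example_card_work);
     * interrupt handler:   schedule_work(&host->card_work);
     *                      (instead of tasklet_schedule())
     */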
-
James Hogan authored
When a request is made, the card presence is checked and the request is queued. These two parts must be atomic with respect to card removal, or a card removal could be handled in between, and the new request wouldn't get cancelled until another card was inserted. Therefore move the spinlock protection from dw_mci_queue_request() up into dw_mci_request() to cover the presence check. Note that the test_bit() used for the presence check isn't atomic itself, so should have been protected by a spinlock anyway. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Signed-off-by: Chris Ball <cjb@laptop.org>
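Schematically, the request path after the change looks like this; the names approximate the dw_mmc ones and example_queue_request() stands in for dw_mci_queue_request():

    static void example_request(struct mmc_host *mmc, struct mmc_request *mrq)
    {
            struct dw_mci_slot *slot = mmc_priv(mmc);
            struct dw_mci *host = slot->host;

            spin_lock_bh(&host->lock);

            /* Presence check and queueing are now atomic w.r.t. card
             * removal, so a removal cannot slip in between them. */
            if (!test_bit(DW_MMC_CARD_PRESENT, &slot->flags)) {
                    spin_unlock_bh(&host->lock);
                    mrq->cmd->error = -ENOMEDIUM;
                    mmc_request_done(mmc, mrq);
                    return;
            }

            example_queue_request(host, slot, mrq);

            spin_unlock_bh(&host->lock);
    }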
-
James Hogan authored
DMA is only used for transactions exceeding a certain length, otherwise PIO is used. The TXDR and RXDR interrupts are masked when in DMA mode but still fire. When switching to PIO mode (e.g. to get SCR field when an SD card is inserted) these interrupts are not cleared and so they trigger the ISR as soon as they are unmasked. If the previous DMA did a write, then the ISR will handle the TXDR interrupt even if the transaction is a read, completing the transaction without modifying the read buffer. This is fixed primarily by clearing these two interrupts before unmasking them when setting up PIO mode, and also by making the ISR more robust by only handling TXDR/RXDR in the correct read/write direction. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Signed-off-by: Chris Ball <cjb@laptop.org>
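The core of the fix can be shown in a few lines: acknowledge any stale FIFO interrupts before unmasking them for PIO (register and bit names as in dw_mmc; the surrounding context in the driver's data-submission path is omitted):

    u32 temp;

    /* Clear TXDR/RXDR left over from the previous (DMA) transfer so the
     * ISR is not entered spuriously as soon as they are unmasked. */
    mci_writel(host, RINTSTS, SDMMC_INT_TXDR | SDMMC_INT_RXDR);

    temp = mci_readl(host, INTMASK);
    temp |= SDMMC_INT_TXDR | SDMMC_INT_RXDR;
    mci_writel(host, INTMASK, temp);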
-
Simon Horman authored
Some controllers require waiting for the bus to become idle before writing to some registers. I have implemented this by adding a hook to sd_ctrl_write16() and implementing a hook for SDHI which waits for the bus to become idle. Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Cc: Magnus Damm <magnus.damm@gmail.com> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Chris Ball <cjb@laptop.org>
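Schematically, the accessor gives the platform a chance to wait for bus idle before the write goes out; this is only an illustrative shape, and the hook's actual name, arguments and return convention in the patch may differ.

    /* Illustrative only: a write16 accessor with a platform idle-wait hook. */
    static void example_sd_ctrl_write16(struct tmio_mmc_host *host, int addr, u16 val)
    {
            /* An SDHI hook would poll a status register until the bus-busy
             * bit clears (with a bounded retry loop) and return non-zero on
             * timeout, in which case the write is skipped here. */
            if (host->pdata->write16_hook &&
                host->pdata->write16_hook(host, addr))
                    return;

            writew(val, host->ctl + (addr << host->bus_shift));
    }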
-
Simon Horman authored
Move register access functions into a shared header. Use sd_ctrl_write16 in tmio_mmc_dma.c:tmio_mmc_enable_dma(). Other than avoiding (trivial) open-coding, the motivation for this is to allow platform-hooks in access functions to be applied across all applicable accesses. Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Cc: Magnus Damm <magnus.damm@gmail.com> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Simon Horman authored
This reflects at least the current usage of this register and I think it improves the readability of the code ever so slightly. Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Cc: Magnus Damm <magnus.damm@gmail.com> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Russell King - ARM Linux authored
Check the status bits in the r/w command response for any errors. If error bits are set, then we won't have seen any data transferred, so it's pointless doing any further checking. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Acked-by: Linus Walleij <linus.walleij@linaro.org> Tested-by: Pawel Moll <pawel.moll@arm.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Russell King - ARM Linux authored
Command channel errors fall into four classes: 1. The command was issued with the card in the wrong state 2. The command failed to be received by the card correctly 3. The card's response failed to be received by the host (CRC error) 4. The card failed to respond to the host For (1), in theory we should know that the card is in the correct state. However, a failed stop command (or other failure) may result in the card remaining in a data transfer state from the previous command. If we detect this condition, we try to recover by sending a stop command. For the initial commands (set block count and the read/write command) no data will have been transferred; all we need to deal with at this point is retrying. A failed stop command can be remedied as above. If we are unable to recover the card (e.g., the card ignores our requests for status, or we don't recognise the error code) then we immediately fail the request. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Acked-by: Linus Walleij <linus.walleij@linaro.org> Tested-by: Pawel Moll <pawel.moll@arm.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Russell King - ARM Linux authored
If the MMC_SEND_STATUS command is not successful, we should not return a zero status word, but instead allow the caller to know positively that an error occurred. Convert the open-coded get_card_status() to use the helper function, and provide definitions for the card state field. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Acked-by: Linus Walleij <linus.walleij@linaro.org> Tested-by: Pawel Moll <pawel.moll@arm.com> Signed-off-by: Chris Ball <cjb@laptop.org>
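For reference, the card state lives in bits 12:9 of the R1 status word; below is a small sketch of the kind of definitions and helper this describes (macro values follow the MMC/SD specification, the helper name is illustrative).

    /* Bits 12:9 of the R1 card status word hold the current state. */
    #define R1_CURRENT_STATE(x)     (((x) & 0x00001E00) >> 9)
    #define R1_STATE_TRAN           4       /* transfer state */
    #define R1_STATE_DATA           5       /* sending data */
    #define R1_STATE_RCV            6       /* receiving data */

    /* Illustrative helper: report the error instead of silently returning
     * a zero status word. */
    static int example_get_card_status(struct mmc_card *card, u32 *status)
    {
            struct mmc_command cmd = {0};
            int err;

            cmd.opcode = MMC_SEND_STATUS;
            cmd.arg = card->rca << 16;
            cmd.flags = MMC_RSP_SPI_R2 | MMC_RSP_R1 | MMC_CMD_AC;

            err = mmc_wait_for_cmd(card->host, &cmd, 5);
            if (!err)
                    *status = cmd.resp[0];

            return err;     /* caller can now distinguish failure from status 0 */
    }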
-
Seungwon Jeon authored
Response timeout (RTO), response CRC error (RCRC) and response error (RE) signals come with command done (CD) and can be raised before command done (CD). That is, these error interrupts and CD can be handled in separate invocations of dw_mci_interrupt(). If mmc_request_done() is called because of a response timeout before command done has occurred, we might send the next request before the CD of the current request is finished. This can break the sequence of request and request-done. The data error interrupts (DRTO, DCRC, SBE, EBE) and data transfer over (DTO) have the same problem. Signed-off-by: Seungwon Jeon <tgih.jun@samsung.com> Acked-by: Will Newton <will.newton@imgtec.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Seungwon Jeon authored
This patch sets the card_width bit of CTYPE for the corresponding card. CTYPE[31] and CTYPE[16] correspond respectively to card[15] and card[0] for 8-bit mode, and CTYPE[15] and CTYPE[0] correspond respectively to card[15] and card[0] for 1-bit or 4-bit mode. Signed-off-by: Seungwon Jeon <tgih.jun@samsung.com> Acked-by: Will Newton <will.newton@imgtec.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Zhangfei Gao authored
1. support brownstone 2. support mmc 3. support basic filesystem and language 4. remove dynamic_debug, since it produces too many log messages when accessing the SD card Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com> Acked-by: Philip Rakity <prakity@marvell.com> Acked-by: Mark F. Brown <mark.brown314@gmail.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-