- 11 Sep, 2020 4 commits
-
-
Robin Murphy authored
Since commit 9495b7e9 ("driver core: platform: Initialize dma_parms for platform devices"), struct platform_device already provides a dma_parms structure, so we can save allocating another one. Signed-off-by:
Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/7fad3c60cac2bf4f8dab791f8b6eafae90abc960.1599164692.git.robin.murphy@arm.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
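This and the two following patches remove the same redundancy from different drivers. A minimal sketch of the pattern, with a hypothetical driver name, since the platform core now pre-initializes dev.dma_parms:

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>

/* Sketch of a hypothetical probe: the per-driver struct device_dma_parameters
 * member and the "pdev->dev.dma_parms = &priv->dma_parms;" assignment are what
 * these patches drop, because struct platform_device already provides the
 * backing storage.
 */
static int foo_dmac_probe(struct platform_device *pdev)
{
	/* No private dma_parms allocation needed any more; just set the
	 * segment limit on the device the platform core already wired up.
	 */
	return dma_set_max_seg_size(&pdev->dev, SZ_64K);
}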
-
Robin Murphy authored
Since commit 9495b7e9 ("driver core: platform: Initialize dma_parms for platform devices"), struct platform_device already provides a dma_parms structure, so we can save allocating another one. Signed-off-by:
Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/116927330a4a66aac579ad38ddbc3b538cd9524c.1599164692.git.robin.murphy@arm.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Robin Murphy authored
Since commit 9495b7e9 ("driver core: platform: Initialize dma_parms for platform devices"), struct platform_device already provides a dma_parms structure, so we can save allocating another one. Signed-off-by:
Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/9b759e4c9eb37c90a3616d31abe13af6a6dafcd2.1599164692.git.robin.murphy@arm.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Brad Kim authored
Because the callback is called twice when a DMA transfer completes, the second callback may access freed memory if the first callback routine calls dma_release_channel(). So this patch serializes the callback functions. Signed-off-by:
Brad Kim <brad.kim@semifive.com> Tested-and-reviewed-by:
Green Wan <green.wan@sifive.com> Signed-off-by:
Brad Kim <brad.kim@sifive.com> Link: https://lore.kernel.org/r/20200903111726.3413-1-brad.kim@sifive.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
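A minimal sketch of the serialization idea (not necessarily the exact SiFive fix); the channel structure and lock are hypothetical:

#include <linux/dmaengine.h>
#include <linux/spinlock.h>

/* Hypothetical channel state: the descriptor is consumed under the lock so a
 * second completion cannot invoke the callback again after the first callback
 * has torn the channel down (e.g. via dma_release_channel()).
 */
struct foo_pdma_chan {
	spinlock_t lock;
	struct dma_async_tx_descriptor *txd;
};

static void foo_pdma_complete(struct foo_pdma_chan *chan)
{
	struct dma_async_tx_descriptor *txd;
	unsigned long flags;

	spin_lock_irqsave(&chan->lock, flags);
	txd = chan->txd;
	chan->txd = NULL;	/* a second completion sees NULL and bails out */
	spin_unlock_irqrestore(&chan->lock, flags);

	if (txd && txd->callback)
		txd->callback(txd->callback_param);
}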
-
- 03 Sep, 2020 9 commits
-
-
Lad Prabhakar authored
Renesas RZ/G SoCs also have the R-Car gen2/3 compatible DMA controllers. Document the RZ/G1H (also known as R8A7742) SoC bindings. Signed-off-by:
Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Reviewed-by:
Marian-Cristian Rotariu <marian-cristian.rotariu.rb@bp.renesas.com> Reviewed-by:
Geert Uytterhoeven <geert+renesas@glider.be> Acked-by:
Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200903073813.4490-1-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
Export the admin command status to a sysfs attribute in order to allow the user to retrieve configuration errors. This allows user tooling to retrieve the command error and provide more user-friendly error messages. Signed-off-by:
Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/159865278770.29455.8026892329182750127.stgit@djiang5-desk3.ch.intel.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
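A hedged sketch of how such a read-only attribute is typically exposed; the structure and field names below are hypothetical, not the actual idxd code:

#include <linux/device.h>
#include <linux/sysfs.h>
#include <linux/types.h>

struct foo_idxd {
	u32 cmd_status;		/* last admin command completion/error status */
};

/* Read-only sysfs attribute: user tooling reads the raw status and maps it
 * to a friendlier error message.
 */
static ssize_t cmd_status_show(struct device *dev,
			       struct device_attribute *attr, char *buf)
{
	struct foo_idxd *idxd = dev_get_drvdata(dev);

	return sprintf(buf, "%#x\n", idxd->cmd_status);
}
static DEVICE_ATTR_RO(cmd_status);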
-
Dave Jiang authored
Add the max_batch_size sysfs attribute to the wq in order to allow the max batch size to be configured on a per-wq basis. Add support code to validate the user input and apply it when the wq is enabled. This is a performance tuning parameter. Signed-off-by:
Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/159865273617.29141.4383066301730821749.stgit@djiang5-desk3.ch.intel.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
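A sketch of a writable per-wq tunable with input validation, in the same spirit; the names, limits, and where the value is rejected are assumptions, not the upstream idxd code (which, per the message above, applies the validated value when the wq is enabled):

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/types.h>

struct foo_wq {
	u32 max_batch_size;
	u32 hw_max_batch_size;	/* device capability read at probe time */
	bool enabled;
};

static ssize_t max_batch_size_show(struct device *dev,
				   struct device_attribute *attr, char *buf)
{
	struct foo_wq *wq = dev_get_drvdata(dev);

	return sprintf(buf, "%u\n", wq->max_batch_size);
}

static ssize_t max_batch_size_store(struct device *dev,
				    struct device_attribute *attr,
				    const char *buf, size_t count)
{
	struct foo_wq *wq = dev_get_drvdata(dev);
	u32 val;
	int rc;

	rc = kstrtou32(buf, 10, &val);
	if (rc < 0)
		return rc;

	/* Reject changes on a live wq and values beyond the device cap; the
	 * stored value would be programmed to hardware when the wq is enabled.
	 */
	if (wq->enabled)
		return -EPERM;
	if (!val || val > wq->hw_max_batch_size)
		return -EINVAL;

	wq->max_batch_size = val;
	return count;
}
static DEVICE_ATTR_RW(max_batch_size);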
-
Dave Jiang authored
Add the max_xfer_size sysfs attribute to the wq in order to allow the max transfer size to be configured on a per-wq basis. Add support code to validate the user input and apply it when the wq is enabled. This is a performance tuning parameter. Signed-off-by:
Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/159865265404.29141.3049399618578194052.stgit@djiang5-desk3.ch.intel.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Krzysztof Kozlowski authored
Common pattern of handling deferred probe can be simplified with dev_err_probe(). Less code and the error value gets printed. Signed-off-by:
Krzysztof Kozlowski <krzk@kernel.org> Link: https://lore.kernel.org/r/20200828152637.16903-3-krzk@kernel.org Signed-off-by:
Vinod Koul <vkoul@kernel.org>
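This and the two following patches apply the same transformation. A sketch of the pattern on a hypothetical clock lookup:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	struct clk *clk;

	clk = devm_clk_get(&pdev->dev, NULL);
	if (IS_ERR(clk)) {
		/* Before:
		 *     if (PTR_ERR(clk) != -EPROBE_DEFER)
		 *             dev_err(&pdev->dev, "failed to get clock\n");
		 *     return PTR_ERR(clk);
		 *
		 * dev_err_probe() stays silent on -EPROBE_DEFER and prints the
		 * error value otherwise, in one line.
		 */
		return dev_err_probe(&pdev->dev, PTR_ERR(clk),
				     "failed to get clock\n");
	}

	return 0;
}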
-
Krzysztof Kozlowski authored
Common pattern of handling deferred probe can be simplified with dev_err_probe(). Less code and the error value gets printed. Signed-off-by:
Krzysztof Kozlowski <krzk@kernel.org> Link: https://lore.kernel.org/r/20200828152637.16903-2-krzk@kernel.org Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Krzysztof Kozlowski authored
Common pattern of handling deferred probe can be simplified with dev_err_probe(). Less code and the error value gets printed. Signed-off-by:
Krzysztof Kozlowski <krzk@kernel.org> Link: https://lore.kernel.org/r/20200828152637.16903-1-krzk@kernel.org Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Andy Shevchenko authored
Split the IS_ERR_OR_NULL() check followed by an additional conditional into two simple conditionals. This increases readability and saves memory:
Function                old    new   delta
dma_request_chan        700    697      -3
Total: Before=10224, After=10221, chg -0.03%
Signed-off-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200828144519.14483-1-andriy.shevchenko@linux.intel.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Peter Ujfalusi authored
New drivers should use dma_request_chan() instead of dma_request_slave_channel(). dma_request_slave_channel() is a simple wrapper for dma_request_chan() that eats up the error code on channel request failure and makes deferred probing impossible. Move dma_request_slave_channel() into the header as an inline function and mark it as deprecated. Signed-off-by:
Peter Ujfalusi <peter.ujfalusi@ti.com> Reviewed-by:
Andy Shevchenko <andy.shevchenko@gmail.com> Link: https://lore.kernel.org/r/20200828110507.22407-1-peter.ujfalusi@ti.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
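The resulting header-only wrapper looks roughly like this; treat it as a sketch, the exact deprecation annotation in the tree may differ:

/* include/linux/dmaengine.h (sketch) */
static inline struct dma_chan *
dma_request_slave_channel(struct device *dev, const char *name)
{
	struct dma_chan *ch = dma_request_chan(dev, name);

	/* Swallows the error code, which is why new code should call
	 * dma_request_chan() directly and handle -EPROBE_DEFER itself.
	 */
	return IS_ERR(ch) ? NULL : ch;
}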
-
- 28 Aug, 2020 1 commit
-
-
Peter Ujfalusi authored
No users left in the kernel, it can be removed. Signed-off-by:
Peter Ujfalusi <peter.ujfalusi@ti.com> Link: https://lore.kernel.org/r/20200828084141.14902-1-peter.ujfalusi@ti.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
- 25 Aug, 2020 9 commits
-
-
Alexandru Ardelean authored
Starting with core version 4.3.a the DMA bus attributes can (and should) be read from the INTERFACE_DESCRIPTION (0x10) register. For older core versions, these will still need to be provided from the device-tree. The bus-type values are identical to the ones stored in the device-trees, so we just need to read them. Bus-width values are stored as log2 values, so we just need to use them as shift amounts to make them equivalent to the current format. Signed-off-by:
Alexandru Ardelean <alexandru.ardelean@analog.com> Link: https://lore.kernel.org/r/20200825151950.57605-7-alexandru.ardelean@analog.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
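A hedged sketch of the decode; the register layout and field masks below are illustrative, not the documented AXI DMAC bit positions:

#include <linux/bitfield.h>
#include <linux/io.h>
#include <linux/types.h>

#define FOO_REG_INTERFACE_DESC	0x10
#define FOO_SRC_TYPE		GENMASK(1, 0)
#define FOO_SRC_WIDTH		GENMASK(7, 4)	/* log2(bus width in bytes) */
#define FOO_DST_TYPE		GENMASK(9, 8)
#define FOO_DST_WIDTH		GENMASK(15, 12)

struct foo_chan {
	unsigned int src_type, src_width;
	unsigned int dest_type, dest_width;
};

static void foo_read_chan_caps(void __iomem *base, struct foo_chan *chan)
{
	u32 val = readl(base + FOO_REG_INTERFACE_DESC);

	chan->src_type  = FIELD_GET(FOO_SRC_TYPE, val);
	chan->dest_type = FIELD_GET(FOO_DST_TYPE, val);
	/* widths are encoded as log2, so a shift converts them to bytes,
	 * matching the format previously taken from the device-tree
	 */
	chan->src_width  = 1U << FIELD_GET(FOO_SRC_WIDTH, val);
	chan->dest_width = 1U << FIELD_GET(FOO_DST_WIDTH, val);
}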
-
Alexandru Ardelean authored
The channel parameters read from the device-tree are adjusted for the DMAEngine framework in the axi_dmac_parse_chan_dt() function. When we want to read these from registers, we will need to use the same logic, so this change splits the logic into a separate function. Signed-off-by:
Alexandru Ardelean <alexandru.ardelean@analog.com> Link: https://lore.kernel.org/r/20200825151950.57605-6-alexandru.ardelean@analog.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Alexandru Ardelean authored
All these attributes will be read from registers in newer core versions, so just wrap the logic into a function. Signed-off-by:
Alexandru Ardelean <alexandru.ardelean@analog.com> Link: https://lore.kernel.org/r/20200825151950.57605-5-alexandru.ardelean@analog.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Alexandru Ardelean authored
The clock may also be required to read registers from the IP core (if it is provided and the driver needs to control it). So, move it earlier in the probe. Signed-off-by:
Alexandru Ardelean <alexandru.ardelean@analog.com> Link: https://lore.kernel.org/r/20200825151950.57605-4-alexandru.ardelean@analog.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Alexandru Ardelean authored
We want to enable the clock right after it is obtained. Then later we'll want to read the core version via register-access (which requires the clock to be enabled). The initialization of the active_descs list can be postponed after reading from registers (or reading the device-tree). Signed-off-by:
Alexandru Ardelean <alexandru.ardelean@analog.com> Link: https://lore.kernel.org/r/20200825151950.57605-3-alexandru.ardelean@analog.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Alexandru Ardelean authored
The 'version' of the IP core will be needed to adapt the driver to a new feature (i.e. reading some DMA parameters from registers). To do that, the version will be checked, so this is being moved out of the axi_dmac_detect_caps() function. Signed-off-by:
Alexandru Ardelean <alexandru.ardelean@analog.com> Link: https://lore.kernel.org/r/20200825151950.57605-2-alexandru.ardelean@analog.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Łukasz Stelmach authored
The instruction dump uses two printk() calls in a row to print one instruction. Use KERN_CONT to prevent the output from being broken in the middle. Signed-off-by:
Łukasz Stelmach <l.stelmach@samsung.com> Link: https://lore.kernel.org/r/20200813204123.19044-1-l.stelmach@samsung.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
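The pattern is simply continuing the same log line with KERN_CONT; a minimal sketch with hypothetical values:

#include <linux/printk.h>
#include <linux/types.h>

static void dump_insn(u32 word0, u32 word1)
{
	/* Two printk() calls, one logical line: without KERN_CONT the second
	 * call could start a new line or be interleaved with other output.
	 */
	printk(KERN_DEBUG "instr:");
	printk(KERN_CONT " %08x %08x\n", word0, word1);
}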
-
Gustavo Pimentel authored
Fix a typo in the comments about the offset related to the padding bytes. Signed-off-by:
Gustavo Pimentel <gustavo.pimentel@synopsys.com> Link: https://lore.kernel.org/r/d7c7e56a83a13a62438a6c1a23863015a3760581.1597327654.git.gustavo.pimentel@synopsys.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Peter Ujfalusi authored
The direction has already been validated in the main callback and there is no need to check it again in the TR mode handlers for slave_sg and cyclic. Signed-off-by:
Peter Ujfalusi <peter.ujfalusi@ti.com> Link: https://lore.kernel.org/r/20200824120120.9270-1-peter.ujfalusi@ti.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
- 19 Aug, 2020 1 commit
-
-
Wei Yongjun authored
The sparse tool complains as follows: drivers/dma/xilinx/xilinx_dpdma.c:349:37: warning: symbol 'dpdma_debugfs_reqs' was not declared. Should it be static? This variable is not used outside of xilinx_dpdma.c, so this commit marks it static. Fixes: 1d220435 ("dmaengine: xilinx: dpdma: Add debugfs support") Reported-by:
Hulk Robot <hulkci@huawei.com> Signed-off-by:
Wei Yongjun <weiyongjun1@huawei.com> Reviewed-by:
Hyun Kwon <hyun.kwon@xilinx.com> Reviewed-by:
Laurent Pinchart <laurent.pinchart@ideasonboard.com> Link: https://lore.kernel.org/r/20200818112217.43816-1-weiyongjun1@huawei.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
- 17 Aug, 2020 12 commits
-
-
Serge Semin authored
The DW DMA IP core provides a way to synthesize the DMA controller with channels having different parameters like maximum burst length, multi-block support, maximum data width, etc. Those parameters both explicitly and implicitly affect the channels' performance. Since DMA slave devices might be very demanding with respect to DMA performance, let's provide functionality for the slaves to be assigned DW DMA channels whose performance, according to the platform engineer, fulfils their requirements. After this patch is applied, it can be done by passing the mask of suitable DMA channels either directly in the dw_dma_slave structure instance or as a fifth cell of the DMA DT property. If the mask is zero or not provided, then there is no limitation on the channel allocation. For instance, the Baikal-T1 SoC is equipped with a DW DMAC engine whose first two channels are synthesized with a max burst length of 16, while the rest of the channels have been created with max-burst-len=4. It would seem that the first two channels must be faster than the others and should be preferable for time-critical DMA slave devices. In practice it turned out that the situation is quite the opposite. The channels with max-burst-len=4 demonstrated better performance than the channels with max-burst-len=16 even when both had been initialized with the same settings. The performance drop of the first two DMA channels made them unsuitable for the DW APB SSI slave device. No matter what settings they are configured with, full-duplex SPI transfers occasionally experience Rx FIFO overflow. It means that the DMA engine doesn't keep up with the incoming data pace even though the SPI bus runs at 25 MHz while the DW DMA controller is clocked with a 50 MHz signal. No such problem has been noticed for the channels synthesized with max-burst-len=4. Signed-off-by:
Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200731200826.9292-6-Sergey.Semin@baikalelectronics.ru Signed-off-by:
Vinod Koul <vkoul@kernel.org>
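A hedged illustration of the direct (non-DT) path: the mask field name in struct dw_dma_slave is assumed here, so treat this as illustrative rather than the exact upstream field:

#include <linux/bits.h>
#include <linux/platform_data/dma-dw.h>

/* Restrict a demanding slave (e.g. an SPI controller's RX) to channels 2-7,
 * keeping it off channels 0-1 on this hypothetical SoC. A zero mask would
 * mean no restriction on channel allocation.
 */
static struct dw_dma_slave spi_dma_rx_slave = {
	.src_id   = 1,
	.dst_id   = 0,
	.m_master = 0,
	.p_master = 1,
	.channels = GENMASK(7, 2),	/* assumed name of the mask field */
};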
-
Serge Semin authored
According to the DW DMA controller Databook 2.18b (page 40 "3.5 Memory Peripherals") memory peripherals don't have handshaking interface connected to the controller, therefore they can never be a flow controller. Since the CTLx.SRC_MSIZE and CTLx.DEST_MSIZE are properties valid only for peripherals with a handshaking interface, we can freely zero these fields out if the memory peripheral is selected to be the source or the destination of the DMA transfers. Note according to the databook, length of burst transfers to memory is always equal to the number of data items available in a channel FIFO or data items required to complete the block transfer, whichever is smaller; length of burst transfers from memory is always equal to the space available in a channel FIFO or number of data items required to complete the block transfer, whichever is smaller. Signed-off-by:
Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200731200826.9292-5-Sergey.Semin@baikalelectronics.ru Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Serge Semin authored
Indeed, in case of DMA_DEV_TO_MEM DMA transfers it's enough to take the destination memory address and the destination master data width into account to calculate the CTLx.DST_TR_WIDTH setting of the memory peripheral. According to the DW DMAC IP-core Databook 2.18b (page 66, Example 5), at the end of a DMA transfer, when the DMA-channel internal FIFO is left with less data than needed for a single destination burst transaction, the destination peripheral will enter the Single Transaction Region where the DW DMA controller can complete a block transfer to the destination using single transactions (non-burst transactions of CTLx.DST_TR_WIDTH bytes). If there is not enough data in the DMA-channel internal FIFO for even a single non-burst transaction of CTLx.DST_TR_WIDTH bytes, then the channel enters "FIFO flush mode". That mode is activated to empty the FIFO and flush the leftovers out to the memory peripheral. The flushing procedure is simple: the data is sent to memory by means of a set of single transactions of CTLx.SRC_TR_WIDTH bytes. To sum up, it's redundant to use the LLP length to find out the CTLx.DST_TR_WIDTH parameter value, since each DMA transfer will be completed with CTLx.SRC_TR_WIDTH-byte transactions if required. We suggest removing the LLP entry length from the statement which calculates the memory peripheral DMA transaction width, since it's redundant due to the feature described above. By doing so we'll improve the memory bus utilization and speed up the DMA-channel performance for DMA_DEV_TO_MEM transfers. Signed-off-by:
Serge Semin <Sergey.Semin@baikalelectronics.ru> Acked-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200731200826.9292-4-Sergey.Semin@baikalelectronics.ru Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Serge Semin authored
The CFGx.FIFO_MODE field controls a DMA-controller "FIFO readiness" criterion. In other words, it determines when to start pushing data out of a DW DMAC channel FIFO to a destination peripheral or from a source peripheral to the DW DMAC channel FIFO. Currently FIFO-mode is set to one for all DW DMAC channels. It means they are tuned to flush data out of the FIFO (to a memory peripheral or by accepting burst transaction requests) when the FIFO is at least half-full (except at the end of the block transfer, when FIFO-flush mode is activated) and are configured to get data into the FIFO when it's at least half-empty. Such a configuration is a good choice when there is no slave device involved in the DMA transfers. In that case the number of bursts per block is less than when CFGx.FIFO_MODE = 0 and, hence, the bus utilization will improve. But the latency of DMA transfers may increase when CFGx.FIFO_MODE = 1, since the DW DMAC will wait for the channel FIFO contents to be either half-full or half-empty depending on whether it is the destination or the source transfer. Such latencies might be dangerous in case the DMA transfers are expected to be performed from/to a slave device. Since peripheral devices normally keep data in internal FIFOs, any latency at some critical moment may cause one to overflow and consequently lose data. This especially concerns the case when either a peripheral device is relatively fast or the DW DMAC engine is relatively slow with respect to the incoming data pace. In order to solve problems which might be caused by the latencies described above, let's enable the FIFO half-full/half-empty "FIFO readiness" criterion only for DMA transfers with no slave device involved. Thanks to commit 99ba8b9b ("dmaengine: dw: Initialize channel before each transfer") we can freely do that in the generic dw_dma_initialize_chan() method. Signed-off-by:
Serge Semin <Sergey.Semin@baikalelectronics.ru> Reviewed-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200731200826.9292-3-Sergey.Semin@baikalelectronics.ru Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Serge Semin authored
Each DW DMA controller channel can be synthesized with different parameters like maximum burst length, multi-block support, maximum data width, etc. Most of these parameters determine the DW DMAC channel's performance in its own aspect. On the other hand, these parameters can be implicitly responsible for channel performance degradation (for instance, multi-block support is a very useful feature, but having it disabled during DW DMAC synthesis will produce a more optimized core). Since DMA slave devices may have a critical dependency on the DMA engine performance, let's provide a way for the slave devices to have their DMA channels allocated from a pool of channels which, according to the system engineer, fulfil their performance requirements. The pool is determined by a mask optionally specified in the fifth DMA cell of the DMA DT property. If the fifth cell is omitted from the phandle arguments or the mask is zero, then the allocation will be performed from the set of all channels provided by the DMA controller. Signed-off-by:
Serge Semin <Sergey.Semin@baikalelectronics.ru> Reviewed-by:
Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200731200826.9292-2-Sergey.Semin@baikalelectronics.ru Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Laurent Pinchart authored
Expose statistics to debugfs when available. This helps debugging issues with the DPDMA driver. Signed-off-by:
Hyun Kwon <hyun.kwon@xilinx.com> Signed-off-by:
Laurent Pinchart <laurent.pinchart@ideasonboard.com> Link: https://lore.kernel.org/r/20200812171228.9751-1-laurent.pinchart@ideasonboard.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Peter Ujfalusi authored
The security accelerator within the MCU domain supports two ports, similarly to the SA2UL in the MAIN domain. Add endpoint configuration for the two ingress threads and one egress thread of the second port. Signed-off-by:
Peter Ujfalusi <peter.ujfalusi@ti.com> Link: https://lore.kernel.org/r/20200803100724.19003-1-peter.ujfalusi@ti.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Peter Ujfalusi authored
Add a new PSI-L map file for the new TI j7200 SoC. The DMA hardware in j7200 is the same as in j721e, but with a different set of peripherals, resulting in a different PSI-L thread map compared to j721e. See the J7200 Technical Reference Manual (SPRUIU1, June 2020) for further details: https://www.ti.com/lit/pdf/spruiu1 Signed-off-by:
Peter Ujfalusi <peter.ujfalusi@ti.com> Link: https://lore.kernel.org/r/20200803125713.17829-3-peter.ujfalusi@ti.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
-
Peter Ujfalusi authored
Instead of separate of_machine_is_compatible() calls, it is better to use soc_device_match() and a soc_device_attribute struct to get the PSI-L map for the booted device. By using soc_device_match() it is easier to add support for new devices. Signed-off-by:
Peter Ujfalusi <peter.ujfalusi@ti.com> Link: https://lore.kernel.org/r/20200803125713.17829-2-peter.ujfalusi@ti.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
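A sketch of the soc_device_match() pattern; the table entries and per-SoC data structs are hypothetical:

#include <linux/sys_soc.h>

struct foo_soc_data {
	const void *psil_map;	/* per-SoC PSI-L thread map */
};

static const struct foo_soc_data foo_am654_data;
static const struct foo_soc_data foo_j721e_data;

static const struct soc_device_attribute foo_soc_devices[] = {
	{ .family = "AM65X", .data = &foo_am654_data },
	{ .family = "J721E", .data = &foo_j721e_data },
	{ /* sentinel */ }
};

static const struct foo_soc_data *foo_get_soc_data(void)
{
	const struct soc_device_attribute *match;

	/* Matches the booted SoC; supporting a new device is one table entry */
	match = soc_device_match(foo_soc_devices);
	return match ? match->data : NULL;
}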
-
Dave Jiang authored
Move the clearing of misc interrupt cause to immediately after reading the register in order to allow the next interrupt to be asserted. Suggested-by:
Nikhil Rao <nikhil.rao@intel.com> Signed-off-by:
Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/159603824665.28647.5344356370364397996.stgit@djiang5-desk3.ch.intel.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
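A sketch of the ordering change; the register offset and handler name are hypothetical:

#include <linux/interrupt.h>
#include <linux/io.h>

static irqreturn_t foo_misc_irq(int irq, void *data)
{
	void __iomem *base = data;
	u32 cause = ioread32(base + 0x30 /* hypothetical cause register */);

	/* Clear the cause immediately after reading it so a new event can
	 * re-assert the interrupt while this one is still being handled.
	 */
	iowrite32(cause, base + 0x30);

	/* ... process each bit from the local 'cause' copy ... */

	return cause ? IRQ_HANDLED : IRQ_NONE;
}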
-
Krzysztof Kozlowski authored
struct of_device_id is provided unconditionally by the of.h header and is used in the driver as well. Remove of_match_ptr() to fix the W=1 compile-test warning with !CONFIG_OF: drivers/dma/ti/omap-dma.c:1892:34: warning: 'omap_dma_match' defined but not used [-Wunused-const-variable=] 1892 | static const struct of_device_id omap_dma_match[] = { Signed-off-by:
Krzysztof Kozlowski <krzk@kernel.org> Reviewed-by:
Laurent Pinchart <laurent.pinchart@ideasonboard.com> Link: https://lore.kernel.org/r/20200728170939.28278-1-krzk@kernel.org Signed-off-by:
Vinod Koul <vkoul@kernel.org>
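The change, sketched on a hypothetical platform driver: drop of_match_ptr() so the table is always referenced, which avoids the unused-variable warning when CONFIG_OF is disabled:

#include <linux/mod_devicetable.h>
#include <linux/platform_device.h>

static const struct of_device_id foo_dma_match[] = {
	{ .compatible = "vendor,foo-dma" },
	{ /* sentinel */ }
};

static struct platform_driver foo_dma_driver = {
	.driver = {
		.name = "foo-dma",
		/* was: .of_match_table = of_match_ptr(foo_dma_match), */
		.of_match_table = foo_dma_match,
	},
};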
-
Vaibhav Gupta authored
Drivers using legacy PM have to manage PCI states and the device's PM states themselves. They also need to take care of configuration registers. With the improved and more powerful generic PM support, the PCI core takes care of the above-mentioned, device-independent jobs. This driver makes use of PCI helper functions like pci_save/restore_state(), pci_enable/disable_device(), and pci_set_power_state() to do the required operations. In generic mode, they are no longer needed. Change the function parameter in both .suspend() and .resume() to "struct device *". Use dev_get_drvdata() to get the driver data. Compile-tested only. Signed-off-by:
Vaibhav Gupta <vaibhavgupta40@gmail.com> Reviewed-by:
Andy Shevchenko <andy.shevchenko@gmail.com> Link: https://lore.kernel.org/r/20200720113740.353479-1-vaibhavgupta40@gmail.com Signed-off-by:
Vinod Koul <vkoul@kernel.org>
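A sketch of what the converted callbacks look like under generic PM; the driver names are hypothetical, and the point is that the PCI core now handles state save/restore and power-state transitions:

#include <linux/device.h>
#include <linux/pm.h>

struct foo_dma;
static void foo_dma_halt(struct foo_dma *fd) { }
static void foo_dma_restart(struct foo_dma *fd) { }

static int __maybe_unused foo_suspend(struct device *dev)
{
	struct foo_dma *fd = dev_get_drvdata(dev);

	/* Device-specific quiesce only; no pci_save_state() /
	 * pci_set_power_state() calls are needed in generic PM.
	 */
	foo_dma_halt(fd);
	return 0;
}

static int __maybe_unused foo_resume(struct device *dev)
{
	struct foo_dma *fd = dev_get_drvdata(dev);

	foo_dma_restart(fd);
	return 0;
}

/* Referenced from the driver's .driver.pm field */
static SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_suspend, foo_resume);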
-
- 16 Aug, 2020 4 commits
-
-
Linus Torvalds authored
-
git://git.kernel.dk/linux-block
Linus Torvalds authored
Pull io_uring fixes from Jens Axboe: "A few different things in here. Seems like syzbot got some more io_uring bits wired up, and we got a handful of reports and the associated fixes are in here. General fixes too, and a lot of them marked for stable. Lastly, a bit of fallout from the async buffered reads, where we now more easily trigger short reads. Some applications don't really like that, so the io_read() code now handles short reads internally, and got a cleanup along the way so that it's now easier to read (and documented). We're now passing tests that failed before"
* tag 'io_uring-5.9-2020-08-15' of git://git.kernel.dk/linux-block:
  io_uring: short circuit -EAGAIN for blocking read attempt
  io_uring: sanitize double poll handling
  io_uring: internally retry short reads
  io_uring: retain iov_iter state over io_read/io_write calls
  task_work: only grab task signal lock when needed
  io_uring: enable lookup of links holding inflight files
  io_uring: fail poll arm on queue proc failure
  io_uring: hold 'ctx' reference around task_work queue + execute
  fs: RWF_NOWAIT should imply IOCB_NOIO
  io_uring: defer file table grabbing request cleanup for locked requests
  io_uring: add missing REQ_F_COMP_LOCKED for nested requests
  io_uring: fix recursive completion locking on oveflow flush
  io_uring: use TWA_SIGNAL for task_work uncondtionally
  io_uring: account locked memory before potential error case
  io_uring: set ctx sq/cq entry count earlier
  io_uring: Fix NULL pointer dereference in loop_rw_iter()
  io_uring: add comments on how the async buffered read retry works
  io_uring: io_async_buf_func() need not test page bit
-
Mike Rapoport authored
Commit 1355c31e ("asm-generic: pgalloc: provide generic pmd_alloc_one() and pmd_free_one()") converted parisc to use generic version of pmd_alloc_one() but it missed the fact that parisc uses order-1 pages for PMD. Restore the original version of pmd_alloc_one() for parisc, just use GFP_PGTABLE_KERNEL that implies __GFP_ZERO instead of GFP_KERNEL and memset. Fixes: 1355c31e ("asm-generic: pgalloc: provide generic pmd_alloc_one() and pmd_free_one()") Reported-by:
Meelis Roos <mroos@linux.ee> Signed-off-by:
Mike Rapoport <rppt@linux.ibm.com> Tested-by:
Meelis Roos <mroos@linux.ee> Reviewed-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Link: https://lkml.kernel.org/r/9f2b5ebd-e4a4-0fa1-6cd3-4b9f6892d1ad@linux.ee Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
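A hedged sketch of the restored arch-specific helper; the exact parisc macro and constant names are from memory and may differ from the tree:

/* arch/parisc/include/asm/pgalloc.h (sketch): parisc PMDs occupy order-1
 * pages, so the generic single-page pmd_alloc_one() cannot be used.
 */
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
	/* GFP_PGTABLE_KERNEL is GFP_KERNEL | __GFP_ZERO, so no memset() needed */
	return (pmd_t *)__get_free_pages(GFP_PGTABLE_KERNEL, PMD_ORDER);
}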
-
git://git.kernel.dk/linux-block
Linus Torvalds authored
Pull block fixes from Jens Axboe: "A few fixes on the block side of things:
 - Discard granularity fix (Coly)
 - rnbd cleanups (Guoqing)
 - md error handling fix (Dan)
 - md sysfs fix (Junxiao)
 - Fix flush request accounting, which caused an IO slowdown for some configurations (Ming)
 - Properly propagate loop flag for partition scanning (Lennart)"
* tag 'block-5.9-2020-08-14' of git://git.kernel.dk/linux-block:
  block: fix double account of flush request's driver tag
  loop: unset GENHD_FL_NO_PART_SCAN on LOOP_CONFIGURE
  rnbd: no need to set bi_end_io in rnbd_bio_map_kern
  rnbd: remove rnbd_dev_submit_io
  md-cluster: Fix potential error pointer dereference in resize_bitmaps()
  block: check queue's limits.discard_granularity in __blkdev_issue_discard()
  md: get sysfs entry after redundancy attr group create
-