- 10 Jun, 2019 8 commits
-
-
Geert Uytterhoeven authored
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Alexandru Ardelean authored
The change replaces the old license information in the comment header with the new SPDX license specifier, and bumps the year range from 2013-2015 to 2013-2019. The latter also reflects the recent changes that were added to the driver. Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Gustavo Pimentel authored
Add Synopsys eDMA IP driver maintainer. This driver aims to support the Synopsys eDMA IP, which is normally distributed along with the Synopsys PCIe EndPoint IP (depending on the use and licensing agreement). Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Cc: Vinod Koul <vkoul@kernel.org> Cc: Joao Pinto <jpinto@synopsys.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Gustavo Pimentel authored
Synopsys eDMA IP is normally distributed along with the Synopsys PCIe EndPoint IP (depending on the use and licensing agreement). This IP requires some basic configuration, such as:
 - eDMA registers BAR
 - eDMA registers offset
 - eDMA registers size
 - eDMA linked list memory BAR
 - eDMA linked list memory offset
 - eDMA linked list memory size
 - eDMA data memory BAR
 - eDMA data memory offset
 - eDMA data memory size
 - eDMA version
 - eDMA mode
 - IRQs available for eDMA
As a working example, the PCIe glue-logic will attach to a Synopsys PCIe EndPoint IP prototype kit (Vendor ID = 0x16c3, Device ID = 0xedda), which has a built-in eDMA IP with this default configuration:
 - eDMA registers BAR = 0
 - eDMA registers offset = 0x00001000 (4 Kbytes)
 - eDMA registers size = 0x00002000 (8 Kbytes)
 - eDMA linked list memory BAR = 2
 - eDMA linked list memory offset = 0x00000000 (0 Kbytes)
 - eDMA linked list memory size = 0x00800000 (8 Mbytes)
 - eDMA data memory BAR = 2
 - eDMA data memory offset = 0x00800000 (8 Mbytes)
 - eDMA data memory size = 0x03800000 (56 Mbytes)
 - eDMA version = 0
 - eDMA mode = EDMA_MODE_UNROLL
 - IRQs = 1
This driver can be compiled as built-in or as an external module. To enable it, select the DW_EDMA_PCIE option in the kernel configuration; it requires, and automatically selects, the DW_EDMA option as well.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Cc: Vinod Koul <vkoul@kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Russell King <rmk+kernel@armlinux.org.uk> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Joao Pinto <jpinto@synopsys.com> Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
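As an illustration of the parameters listed above, here is a minimal sketch of how the prototype-kit defaults could be carried as driver data; the struct and field names are assumptions made for this example, not the driver's actual definitions.

  #include <linux/types.h>

  /* Illustrative only: field names are assumed, not taken from the driver. */
  struct edma_pcie_cfg {
          u8  rg_bar;  u32 rg_off;  u32 rg_sz;    /* eDMA registers */
          u8  ll_bar;  u32 ll_off;  u32 ll_sz;    /* linked list memory */
          u8  dt_bar;  u32 dt_off;  u32 dt_sz;    /* data memory */
          u8  version;
          u8  mode;                                /* 1 = unroll (assumed encoding) */
          u8  irqs;
  };

  /* Defaults of the Synopsys EP prototype kit (0x16c3:0xedda) quoted above. */
  static const struct edma_pcie_cfg snps_edda_cfg = {
          .rg_bar = 0, .rg_off = 0x00001000, .rg_sz = 0x00002000, /*  8 KB @ 4 KB */
          .ll_bar = 2, .ll_off = 0x00000000, .ll_sz = 0x00800000, /*  8 MB @ 0    */
          .dt_bar = 2, .dt_off = 0x00800000, .dt_sz = 0x03800000, /* 56 MB @ 8 MB */
          .version = 0,
          .mode = 1,
          .irqs = 1,
  };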
-
Gustavo Pimentel authored
Create and add the Synopsys Endpoint EDDA Device ID to the PCI ID list, since this ID is now being used by two different drivers (pci_endpoint_test.ko and dw-edma-pcie.ko). Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Cc: Kishon Vijay Abraham I <kishon@ti.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Joao Pinto <jpinto@synopsys.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Gustavo Pimentel authored
Add Synopsys eDMA IP version 0 debugfs support to assist any future debugging. It creates a file system structure composed of folders and files that mimic the IP register map (these files are read-only) to ease debugging. To enable this feature it is necessary to select the DEBUG_FS option in the kernel configuration. Small output example (eDMA IP version 0, unroll, 1 write + 1 read channels):
% mount -t debugfs none /sys/kernel/debug/
% tree /sys/kernel/debug/dw-edma-core:0/
dw-edma/
├── version
├── mode
├── wr_ch_cnt
├── rd_ch_cnt
└── registers
    ├── ctrl_data_arb_prior
    ├── ctrl
    ├── write
    │   ├── engine_en
    │   ├── doorbell
    │   ├── ch_arb_weight_low
    │   ├── ch_arb_weight_high
    │   ├── int_status
    │   ├── int_mask
    │   ├── int_clear
    │   ├── err_status
    │   ├── done_imwr_low
    │   ├── done_imwr_high
    │   ├── abort_imwr_low
    │   ├── abort_imwr_high
    │   ├── ch01_imwr_data
    │   ├── ch23_imwr_data
    │   ├── ch45_imwr_data
    │   ├── ch67_imwr_data
    │   ├── linked_list_err_en
    │   ├── engine_chgroup
    │   ├── engine_hshake_cnt_low
    │   ├── engine_hshake_cnt_high
    │   ├── ch0_pwr_en
    │   ├── ch1_pwr_en
    │   ├── ch2_pwr_en
    │   ├── ch3_pwr_en
    │   ├── ch4_pwr_en
    │   ├── ch5_pwr_en
    │   ├── ch6_pwr_en
    │   ├── ch7_pwr_en
    │   └── channel:0
    │       ├── ch_control1
    │       ├── ch_control2
    │       ├── transfer_size
    │       ├── sar_low
    │       ├── sar_high
    │       ├── dar_high
    │       ├── llp_low
    │       └── llp_high
    └── read
        ├── engine_en
        ├── doorbell
        ├── ch_arb_weight_low
        ├── ch_arb_weight_high
        ├── int_status
        ├── int_mask
        ├── int_clear
        ├── err_status_low
        ├── err_status_high
        ├── done_imwr_low
        ├── done_imwr_high
        ├── abort_imwr_low
        ├── abort_imwr_high
        ├── ch01_imwr_data
        ├── ch23_imwr_data
        ├── ch45_imwr_data
        ├── ch67_imwr_data
        ├── linked_list_err_en
        ├── engine_chgroup
        ├── engine_hshake_cnt_low
        ├── engine_hshake_cnt_high
        ├── ch0_pwr_en
        ├── ch1_pwr_en
        ├── ch2_pwr_en
        ├── ch3_pwr_en
        ├── ch4_pwr_en
        ├── ch5_pwr_en
        ├── ch6_pwr_en
        ├── ch7_pwr_en
        └── channel:0
            ├── ch_control1
            ├── ch_control2
            ├── transfer_size
            ├── sar_low
            ├── sar_high
            ├── dar_high
            ├── llp_low
            └── llp_high
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Cc: Vinod Koul <vkoul@kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Russell King <rmk+kernel@armlinux.org.uk> Cc: Joao Pinto <jpinto@synopsys.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
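As a rough sketch of how such a read-only register view can be exposed, the snippet below uses the generic debugfs regset helper; the register names and offsets are invented for the example, and the actual driver may structure its files differently (e.g. one file per register).

  #include <linux/debugfs.h>
  #include <linux/kernel.h>

  /* Two example write-channel registers; offsets are made up. */
  static const struct debugfs_reg32 edma_wr_regs_sketch[] = {
          { .name = "engine_en", .offset = 0x00 },
          { .name = "doorbell",  .offset = 0x04 },
  };

  static struct debugfs_regset32 edma_wr_regset_sketch;

  static void edma_debugfs_init_sketch(void __iomem *regs)
  {
          struct dentry *root = debugfs_create_dir("dw-edma", NULL);

          edma_wr_regset_sketch.regs  = edma_wr_regs_sketch;
          edma_wr_regset_sketch.nregs = ARRAY_SIZE(edma_wr_regs_sketch);
          edma_wr_regset_sketch.base  = regs;
          /* One read-only file dumping the write-channel registers. */
          debugfs_create_regset32("write", 0444, root, &edma_wr_regset_sketch);
  }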
-
Gustavo Pimentel authored
Add support for the eDMA IP version 0 driver for both register maps (legacy and unroll). The legacy register mapping was the initial implementation: all channel registers are multiplexed behind a view-port register, so the selected channel can change at any time (which could lead to a race condition) and only one channel is accessible at a time. This register mapping is not very effective or efficient in a multithreaded environment, which led to the development of the unroll register mapping, where the registers of every channel are accessible at any time because each channel's register block is repeated at a fixed offset from the previous one. This version supports a maximum of 16 independent channels (8 write + 8 read), which can run simultaneously. It implements scatter-gather transfers through a linked list, where the size of the linked list depends on the allocated memory divided equally among all channels. Each linked list descriptor can transfer from 1 byte to 4 Gbytes and is aligned to DWORD. Both the SAR (Source Address Register) and DAR (Destination Address Register) are aligned to byte. Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Cc: Vinod Koul <vkoul@kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Russell King <rmk+kernel@armlinux.org.uk> Cc: Joao Pinto <jpinto@synopsys.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
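To make the legacy-versus-unroll distinction concrete, here is a small sketch of the two access patterns; the viewport offset and per-channel stride are placeholders, not the IP's documented values.

  #include <linux/io.h>

  #define CH_VIEWPORT_OFF 0x098   /* placeholder viewport register offset */
  #define CH_REGS_STRIDE  0x200   /* placeholder per-channel stride (unroll) */

  /* Legacy map: one shared register window selected through a viewport
   * register, so only one channel is visible at a time and callers must
   * serialize the select + access sequence. */
  static u32 legacy_ch_read(void __iomem *base, unsigned int ch, u32 reg)
  {
          writel(ch, base + CH_VIEWPORT_OFF);
          return readl(base + reg);
  }

  /* Unroll map: every channel has its own copy of the registers at a fixed
   * offset, so channels can be accessed concurrently without a viewport. */
  static u32 unroll_ch_read(void __iomem *base, unsigned int ch, u32 reg)
  {
          return readl(base + ch * CH_REGS_STRIDE + reg);
  }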
-
Gustavo Pimentel authored
Add the Synopsys PCIe Endpoint eDMA IP core driver to the kernel. This IP is generally distributed with the Synopsys PCIe Endpoint IP (depending on the use and licensing agreement). This core driver initializes and configures the eDMA IP using vma-helpers functions and the dma-engine subsystem. This driver can be compiled as built-in or as an external module. To enable it, select the DW_EDMA option in the kernel configuration; it requires, and automatically selects, the DMA_ENGINE and DMA_VIRTUAL_CHANNELS options as well.
In order to transfer data from point A to B as fast as possible, this IP requires a dedicated memory space containing a linked list of elements. All elements of this linked list are contiguous and each one describes a data transfer (source and destination addresses, length and a control variable). For the sake of simplicity, let's assume a memory space for channel write 0 which allows about 42 elements:
Desc #0
 +- Chunk #0 (CB = 1): Burst #0   -> ... -> Burst #41  -> llp
 +- Chunk #1 (CB = 0): Burst #42  -> ... -> Burst #83  -> llp
 +- Chunk #2 (CB = 1): Burst #84  -> ... -> Burst #125 -> llp
 +- Chunk #3 (CB = 0): Burst #126 -> ... -> Burst #129 -> llp
Legend:
 - Linked list, also known as Chunk
 - Linked list element, also known as Burst
 - CB, also known as Change Bit: a control bit (typically toggled) that allows to easily identify and differentiate between the current linked list and the previous or the next one.
 - LLP: a special element that indicates the end of the linked list element stream and also informs that the next CB should be toggled.
On every last Burst of a Chunk (Burst #41, Burst #83, Burst #125 or even Burst #129) some flags are set in its control variable (RIE and LIE bits) that will trigger a "done" interrupt. In the interrupt callback it is decided whether to recycle the linked list memory space by writing a new set of Burst elements (if there are still Chunks left to transfer) or to consider the transfer complete (if there are no Chunks left to transfer).
In scatter-gather transfer mode, the client submits a scatter-gather list of n (in this case 130) elements, which is divided into multiple Chunks; each Chunk holds a limited number of Bursts (in this case 42), and after all of its Bursts have been transferred an interrupt is triggered, which allows recycling the whole dedicated linked list memory with the information for the next Chunk and its Bursts, repeating the cycle. In cyclic transfer mode, the client submits a buffer pointer, its length and the number of repetitions; in this case each Burst corresponds directly to one repetition. Each Burst can describe a data transfer from point A (source) to point B (destination) with a length from 1 byte up to 4 GB.
Since the dedicated memory space where the linked list resides is limited, the whole set of n Burst elements is organized into several Chunks, which are used later to recycle the dedicated memory space and initiate a new sequence of data transfers. The whole transfer is considered complete when all Bursts have been transferred.
Currently this IP has a well-known register map, which includes support for legacy and unroll modes. Legacy mode is the version of this register map that has a multiplexer register to switch between the registers of all write and read channels, while unroll mode repeats the registers of all write and read channels with an offset between them. This register map is called v0. The IP team is creating a new register map more suitable to the latest PCIe features, which will very likely change the register map; that version will be called v1. As soon as this new version is released by the IP team, support for it will be included in this driver.
Logically, patches 1, 2 and 3 should be squashed into a single patch, but for the sake of review simplicity they were divided into these 3 patches.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Cc: Vinod Koul <vkoul@kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Russell King <rmk+kernel@armlinux.org.uk> Cc: Joao Pinto <jpinto@synopsys.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
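The Chunk/Burst organization described above could be modeled roughly as below; these structure shapes are illustrative assumptions, not the driver's own definitions.

  #include <linux/list.h>
  #include <linux/types.h>

  struct edma_burst {                     /* one linked-list element */
          struct list_head list;
          u64 sar;                        /* source address */
          u64 dar;                        /* destination address */
          u32 sz;                         /* 1 byte .. 4 GB */
  };

  struct edma_chunk {                     /* one linked list, recycled in place */
          struct list_head list;
          struct list_head burst_list;    /* up to the per-channel budget (42 here) */
          bool cb;                        /* Change Bit, toggled chunk to chunk */
          u32 bursts;
  };

  struct edma_desc {                      /* a whole client transfer */
          struct list_head chunk_list;    /* Chunk #0, #1, ... until all bursts done */
          u32 chunks;
  };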
-
- 07 Jun, 2019 2 commits
-
-
Long Cheng authored
The driver filename is mtk-uart-apdma.c, so using "mtk-uart-apdma.txt" for the binding document is a better match. Also add some properties. Signed-off-by: Long Cheng <long.cheng@mediatek.com> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Long Cheng authored
Add the 8250 UART APDMA driver to support the MediaTek UART. When the MediaTek UART is enabled by SERIAL_8250_MT6577, this driver can be enabled to offload byte transfers from the UART device. Signed-off-by: Long Cheng <long.cheng@mediatek.com> Signed-off-by: Sean Wang <sean.wang@mediatek.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
- 04 Jun, 2019 8 commits
-
-
Jernej Skrabec authored
The H6 DMA has more than 32 supported DRQs, which means the configuration register is slightly rearranged. It also needs an additional clock to be enabled. Add support for it. Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net> Signed-off-by: Clément Péron <peron.clem@gmail.com> Acked-by: Maxime Ripard <maxime.ripard@bootlin.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Jernej Skrabec authored
The H6 DMA has its mode fields in a different position than any other currently supported DMA controller. Add a quirk for that. Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net> Signed-off-by: Clément Péron <peron.clem@gmail.com> Acked-by: Maxime Ripard <maxime.ripard@bootlin.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Jernej Skrabec authored
The H6 DMA has more than 32 possible DRQs, so the current maximum of 31 DRQs is no longer enough. Add a quirk which sets the source and destination DRQ numbers. Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net> Signed-off-by: Clément Péron <peron.clem@gmail.com> Acked-by: Maxime Ripard <maxime.ripard@bootlin.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Jernej Skrabec authored
The H6 DMA controller needs an additional mbus clock to be enabled. Add a quirk for it and handle it accordingly. Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net> Signed-off-by: Clément Péron <peron.clem@gmail.com> Acked-by: Maxime Ripard <maxime.ripard@bootlin.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Jernej Skrabec authored
The DMA controller in the H6 is similar to other supported DMA controllers, except that it is the first to support more than 32 request sources and it has 16 channels. It also needs an additional clock to be enabled. Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Clément Péron <peron.clem@gmail.com> Acked-by: Maxime Ripard <maxime.ripard@bootlin.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dmitry Osipenko authored
Apparently the driver was never tested with the DMA_PREP_INTERRUPT flag unset, because in that case it completely disables interrupt handling instead of merely skipping the callback invocations, leaving the channel in an unusable state. The flag is always set by all kernel drivers that use the APB DMA, so error out otherwise for consistency. It won't be difficult to support that case properly if it is ever needed. Signed-off-by: Dmitry Osipenko <digetx@gmail.com> Acked-by: Jon Hunter <jonathanh@nvidia.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
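A minimal sketch of the guard this change adds is shown below; the function name and error message are placeholders rather than the exact tegra20-apb-dma code.

  #include <linux/dmaengine.h>

  static struct dma_async_tx_descriptor *
  apb_dma_prep_slave_sg_sketch(struct dma_chan *dc, struct scatterlist *sgl,
                               unsigned int sg_len,
                               enum dma_transfer_direction dir,
                               unsigned long flags, void *context)
  {
          /* Interrupt handling is built around the completion path, so a
           * request without DMA_PREP_INTERRUPT is rejected instead of
           * silently leaving the channel without a usable interrupt. */
          if (!(flags & DMA_PREP_INTERRUPT)) {
                  dev_err(dc->device->dev, "DMA_PREP_INTERRUPT is mandatory\n");
                  return NULL;
          }

          /* ... normal descriptor preparation would follow here ... */
          return NULL;
  }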
-
Peng Ma authored
When an error occurs, we should clear the error register before returning. Signed-off-by: Peng Ma <peng.ma@nxp.com> [vkoul: change patch title] Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Peng Ma authored
The CMD of the Source/Destination descriptor format should be the lower part of the struct fsl_qdma_engine data address. Signed-off-by: Peng Ma <peng.ma@nxp.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
- 27 May, 2019 12 commits
-
-
Alexandru Ardelean authored
The `copy_align` property is a generic property that describes the alignment for DMA memcpy & sg ops. It serves mostly an informational purpose and can be used in DMA tests to know what alignment to expect. Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
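As a hedged sketch of where this information ends up, a driver typically advertises it through the dmaengine `copy_align` field so that clients and dmatest can query it; the 4-byte value below is only an example, not the value this driver uses.

  #include <linux/dmaengine.h>

  static void advertise_copy_align_sketch(struct dma_device *dma_dev)
  {
          /* Example: memcpy/sg buffers are expected to be 4-byte aligned. */
          dma_dev->copy_align = DMAENGINE_ALIGN_4_BYTES;
  }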
-
Lars-Peter Clausen authored
Starting with version 4.1.a the AXI-DMAC is capable of reporting the required length alignment. The LSBs that are required to be set for alignment always read back as set from the transfer length register; it is not possible to clear them by writing a 0. This means the driver can discover the length alignment requirement by writing 0 to that register and reading back the value. Since the DMAC can have a length alignment requirement that differs from the address alignment requirement, track both of them independently. For older versions of the peripheral, assume that the length alignment requirement is equal to the address alignment requirement. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
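A small sketch of the detection trick described above; the register offset is an assumption for illustration, not the core's documented map.

  #include <linux/io.h>

  #define REG_X_LENGTH_SKETCH 0x418       /* assumed transfer length register */

  static u32 detect_length_align_mask(void __iomem *base)
  {
          /* LSBs that must stay set for alignment read back as 1 even after
           * writing 0, so the read-back value is the alignment mask itself
           * (e.g. 0x3 means the length must be a multiple of 4 bytes). */
          writel(0, base + REG_X_LENGTH_SKETCH);
          return readl(base + REG_X_LENGTH_SKETCH);
  }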
-
Alexandru Ardelean authored
The AXI HDL cores provided for Analog Devices reference designs all share some common base registers (e.g. the version register at address 0x00). To reduce duplication, a common header is added to define these registers as well as the bitfields & macros needed to work with them. Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Weitao Hou authored
Use to_platform_device() instead of open-coding it. Signed-off-by: Weitao Hou <houweitaoo@gmail.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
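For reference, the cleanup amounts to the idiom below (a sketch, not the patched driver's exact context).

  #include <linux/platform_device.h>

  static struct platform_device *dev_to_pdev(struct device *dev)
  {
          /* before: container_of(dev, struct platform_device, dev) */
          return to_platform_device(dev);
  }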
-
Baolin Wang authored
Let the DMA engine core do the device node validation instead of drivers. Signed-off-by: Baolin Wang <baolin.wang@linaro.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Baolin Wang authored
Let the DMA engine core do the device node validation instead of drivers. Signed-off-by: Baolin Wang <baolin.wang@linaro.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Baolin Wang authored
Let the DMA engine core do the device node validation instead of drivers. Signed-off-by: Baolin Wang <baolin.wang@linaro.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Baolin Wang authored
Let the DMA engine core do the device node validation instead of drivers. Signed-off-by: Baolin Wang <baolin.wang@linaro.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Baolin Wang authored
Let the DMA engine core do the device node validation instead of drivers. Signed-off-by: Baolin Wang <baolin.wang@linaro.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Baolin Wang authored
Let the DMA engine core do the device node validation instead of drivers. Signed-off-by: Baolin Wang <baolin.wang@linaro.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Baolin Wang authored
The __dma_request_channel() prototype has been changed to help do device node validation, so use dma_request_channel() instead of __dma_request_channel() to keep the kernel bisectable. Signed-off-by: Baolin Wang <baolin.wang@linaro.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Baolin Wang authored
When a user requests a DMA channel via __dma_request_channel(), the core does not validate whether it comes from the correct DMA device, which forces each DMA engine driver to validate the correct device node in its filter function where necessary. Thus, add the matching device node validation in the DMA engine core, so that all of the device node validation in the drivers can be removed. Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Baolin Wang <baolin.wang@linaro.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
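A simplified sketch of the core-side check this adds; the real candidate-selection code is more involved, so treat this as an outline only.

  #include <linux/dmaengine.h>
  #include <linux/of.h>

  static bool dma_device_node_matches(struct dma_device *device,
                                      struct device_node *np)
  {
          /* Only reject a device when the caller asked for a specific node
           * and the device has an of_node that differs from it. */
          if (np && device->dev->of_node && np != device->dev->of_node)
                  return false;
          return true;
  }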
-
- 22 May, 2019 1 commit
-
-
Vinod Koul authored
We get a compiler warning about the variable ‘tail_desc’ being set but not used: drivers/dma/xilinx/xilinx_dma.c:1102:42: warning: variable ‘tail_desc’ set but not used [-Wunused-but-set-variable] struct xilinx_dma_tx_descriptor *desc, *tail_desc; So remove it. Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
- 21 May, 2019 6 commits
-
-
Lars-Peter Clausen authored
The AXI-DMAC supports different types of interface for the data source and destination ports. Typically one of those ports is a memory-mapped interface while the other is some kind of streaming interface. The information about which kind of interface is used for each port is encoded in the devicetree. It is also possible in the driver to detect whether a port supports memory-mapped transfers or not. For streaming interfaces the address register is read-only and will always return 0. So in order to check if a port supports memory-mapped transfers, write a non-zero value to the corresponding address register and check that the value read back is still non-zero. This makes it possible to detect mismatches between the devicetree description and the actual hardware configuration. Unfortunately it is not possible to autodetect the interface types, since there is no method to distinguish between the different streaming ports. So the best that can be done is to error out when a memory-mapped port is described in the devicetree but none is detected in the hardware. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
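A rough sketch of the probe described above; the address register offset is an assumption passed in by the caller for the example.

  #include <linux/io.h>

  static bool port_is_memory_mapped_sketch(void __iomem *base, u32 addr_reg)
  {
          /* Streaming ports have a read-only, always-zero address register,
           * so a non-zero write only "sticks" on a memory-mapped port. */
          writel(0xffffffff, base + addr_reg);
          return readl(base + addr_reg) != 0;
  }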
-
Michael Hennerich authored
The TLAST flag is used to signal to the DMAC HDL controller that the segment being submitted is the last one (in a series of segments). A receiver DMA (typically another DMAC) can read this parameter (from the transfer) and terminate the transfer earlier. A typical use-case for this is when the receiver expects a certain number of segments, but for some reason (e.g. an ADC capture which can have an unknown number of digital samples) the number of actual segments is smaller. The receiver reads this flag, and the DMAC then finishes. Signed-off-by: Michael Hennerich <michael.hennerich@analog.com> Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dragos Bogdan authored
The DMAC HDL core supports interleaved & cyclic transfers. An example use-case for this mode is when the controller is used as a video DMA. This change sets the `cyclic` field to true, so that when the IRQ comes and the `axi_dmac_transfer_done()` callback is called (from the interrupt handler) the proper `vchan_cyclic_callback()` is called. This way the DMAEngine framework will process data correctly for interleaved + cyclic transfers. This doesn't fix anything. It's an enhancement to the driver. Signed-off-by: Dragos Bogdan <dragos.bogdan@analog.com> Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Amelie Delaunay authored
Commit c6504be5 ("dmaengine: stm32-dma: Fix unsigned variable compared with zero") duplicated the call to platform_get_irq. So remove the first call to platform_get_irq. Fixes: c6504be5 ("dmaengine: stm32-dma: Fix unsigned variable compared with zero") Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Paul Cercueil authored
Use SPDX license notifier instead of plain text in the header. Signed-off-by: Paul Cercueil <paul@crapouillou.net> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Simon Horman authored
The SUDMAC driver was introduced in v3.10 but was never integrated for use by any platform. As it is unused, remove it. Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
- 19 May, 2019 3 commits
-
-
Linus Torvalds authored
-
Merge tag 'upstream-5.2-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs
Linus Torvalds authored
Pull UBIFS fixes from Richard Weinberger: - build errors wrt xattrs - a mismerge which led to a wrong Kconfig ifdef - missing endianness conversion * tag 'upstream-5.2-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs: ubifs: Convert xattr inum to host order ubifs: Use correct config name for encryption ubifs: Fix build error without CONFIG_UBIFS_FS_XATTR
-
Linus Torvalds authored
Merge yet more updates from Andrew Morton: "A few final bits: - large changes to vmalloc, yielding large performance benefits - tweak the console-flush-on-panic code - a few fixes" * emailed patches from Andrew Morton <akpm@linux-foundation.org>: panic: add an option to replay all the printk message in buffer initramfs: don't free a non-existent initrd fs/writeback.c: use rcu_barrier() to wait for inflight wb switches going into workqueue when umount mm/compaction.c: correct zone boundary handling when isolating pages from a pageblock mm/vmap: add DEBUG_AUGMENT_LOWEST_MATCH_CHECK macro mm/vmap: add DEBUG_AUGMENT_PROPAGATE_CHECK macro mm/vmalloc.c: keep track of free blocks for vmap allocation
-