- 25 Feb, 2019 9 commits
-
-
Alexandru Ardelean authored
This patch starts to take advantage of the `dmatest_data` struct by moving the common allocation & freeing bits into functions. Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
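A minimal sketch of what such common helpers could look like, assuming the `struct dmatest_data` layout described in the wrapping patch below; the function names and error handling here are illustrative, not necessarily what the driver uses:

    /* Illustrative helpers only; the real dmatest code differs in details
     * such as error unwinding and per-buffer pattern initialization. */
    static int dmatest_alloc_test_data(struct dmatest_data *d,
                                       unsigned int buf_size, u8 align)
    {
            unsigned int i;

            d->raw = kcalloc(d->cnt, sizeof(u8 *), GFP_KERNEL);
            d->aligned = kcalloc(d->cnt, sizeof(u8 *), GFP_KERNEL);
            if (!d->raw || !d->aligned)
                    return -ENOMEM;

            for (i = 0; i < d->cnt; i++) {
                    d->raw[i] = kmalloc(buf_size + align, GFP_KERNEL);
                    if (!d->raw[i])
                            return -ENOMEM;     /* caller frees on failure */
                    /* hand an aligned pointer to the DMA engine */
                    d->aligned[i] = align ? PTR_ALIGN(d->raw[i], align) : d->raw[i];
            }
            return 0;
    }

    static void dmatest_free_test_data(struct dmatest_data *d)
    {
            unsigned int i;

            if (d->raw) {
                    for (i = 0; i < d->cnt; i++)
                            kfree(d->raw[i]);   /* kfree(NULL) is a no-op */
            }
            kfree(d->raw);
            kfree(d->aligned);
    }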
-
Alexandru Ardelean authored
This is just a cosmetic change, since this variable gets used quite a bit inside the dmatest_func() routine. Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Alexandru Ardelean authored
This change wraps the data for the source & destination buffers into a `struct dmatest_data`. The rename patterns are: * src_cnt -> src->cnt * dst_cnt -> dst->cnt * src_off -> src->off * dst_off -> dst->off * thread->srcs -> src->aligned * thread->usrcs -> src->raw * thread->dsts -> dst->aligned * thread->udsts -> dst->raw The intent is to then move the duplicated parts of the code into common alloc & free functions, which will unclutter `dmatest_func()`. Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
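A rough illustration of the per-direction bookkeeping struct implied by the rename table above; this is a sketch, the real definition in drivers/dma/dmatest.c may differ in field order or extra members:

    struct dmatest_data {
            u8              **raw;      /* unaligned allocations (was thread->usrcs / thread->udsts) */
            u8              **aligned;  /* aligned pointers handed to DMA (was thread->srcs / thread->dsts) */
            unsigned int    cnt;        /* number of buffers (was src_cnt / dst_cnt) */
            unsigned int    off;        /* offset into each buffer (was src_off / dst_off) */
    };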
-
Dave Jiang authored
IOATDMA 3.4 supports the PCIe LTR mechanism through non-standard (device-specific) registers rather than the standard PCIe LTR capability. These need to be set up in order to avoid a performance impact and to provide proper power management. The channel is set to active when it is allocated, and to passive when it is freed. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
Add support for a new ioatdma 3.4 hardware feature that provides descriptor pre-fetching in order to reduce small DMA latencies. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
IOATDMA v3.4 does not support DCA, so disable it on that hardware. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Dave Jiang authored
Add the Snowridge Xeon-D ioatdma PCI device id, which also applies to Icelake SP Xeon. This introduces the ioatdma v3.4 platform. Also bump the driver version to 5.0, since additional code is being added for 3.4 support. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
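For reference, adding a PCI device id to ioatdma usually looks like the sketch below; the table and macro names follow the common kernel pattern, and the device id value shown is a placeholder, not the real Snowridge/Icelake-SP id:

    /* Placeholder value only; the actual device id is defined in the driver. */
    #define PCI_DEVICE_ID_INTEL_IOAT_SNR    0xffff

    static const struct pci_device_id ioat_pci_tbl[] = {
            /* ... existing ioatdma ids ... */
            { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNR) },
            { 0, }
    };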
-
Baolin Wang authored
We will describe the slave id in the DMA cell specifier instead of the DMA channel id, so save the slave id in the DMA engine translation function and drop the channel id validation. There is also no longer any need to set a default slave id in sprd_dma_alloc_chan_resources(), so remove that as well. Signed-off-by: Baolin Wang <baolin.wang@linaro.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
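A minimal sketch of the idea: the filter/translation path remembers the slave id coming from the DT cell so it can later be programmed as the channel's hardware trigger. The types and field names below are the driver's own and are only approximated here:

    /* Illustrative only: store the slave id instead of validating a channel id. */
    static bool sprd_dma_filter_fn(struct dma_chan *chan, void *param)
    {
            struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
            u32 slave_id = *(u32 *)param;

            schan->dev_id = slave_id;   /* remembered for later trigger setup */
            return true;
    }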
-
Baolin Wang authored
For the Spreadtrum DMA engine all channels are equal, which means a slave can request any channel and trigger it by setting a unique slave id. Thus we can remove the channel id from the device tree and assign channels dynamically; the slave id should be described in the device tree instead. Signed-off-by: Baolin Wang <baolin.wang@linaro.org> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
- 19 Feb, 2019 1 commit
-
-
Robin Murphy authored
Using dma_dev->dev for mappings before it's assigned with the correct device is unlikely to work as expected, and with future dma-direct changes, passing a NULL device may end up crashing entirely. I don't know enough about this hardware or the mv_xor_prep_dma_interrupt() operation to implement the appropriate error-handling logic that would have revealed those dma_map_single() calls failing on arm64 for as long as the driver has been enabled there, but moving the assignment earlier will at least make the current code operate as intended. Fixes: 22843545 ("dma: mv_xor: Add support for DMA_INTERRUPT") Reported-by: John David Anglin <dave.anglin@bell.net> Tested-by: John David Anglin <dave.anglin@bell.net> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com> Tested-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
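The essence of the fix is ordering; a sketch of the idea (variable names here are illustrative, not the exact hunk from drivers/dma/mv_xor.c):

    /* Assign the struct device before any dma_map_single() that uses it. */
    dma_dev->dev = &pdev->dev;      /* previously assigned only after the mappings */
    addr = dma_map_single(dma_dev->dev, buf, len, DMA_FROM_DEVICE);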
-
- 11 Feb, 2019 2 commits
-
-
Federico Vaga authored
It clarifies that the DMA descriptor pointer returned by a `dmaengine_prep_*` function should not be used after submission. Signed-off-by: Federico Vaga <federico.vaga@cern.ch> Signed-off-by: Vinod Koul <vkoul@kernel.org>
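A client-side sketch of the rule being documented, using the standard dmaengine slave API; names such as start_transfer, my_done_callback and buf_dma are placeholders:

    #include <linux/dmaengine.h>

    static int start_transfer(struct dma_chan *chan, dma_addr_t buf_dma,
                              size_t len, void *cb_param,
                              dma_async_tx_callback my_done_callback)
    {
            struct dma_async_tx_descriptor *desc;
            dma_cookie_t cookie;

            desc = dmaengine_prep_slave_single(chan, buf_dma, len,
                                               DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
            if (!desc)
                    return -ENOMEM;

            desc->callback = my_done_callback;      /* set up before submitting */
            desc->callback_param = cb_param;

            cookie = dmaengine_submit(desc);
            if (dma_submit_error(cookie))
                    return -EIO;
            /* the key point: do not touch 'desc' from this point on */

            dma_async_issue_pending(chan);
            return 0;
    }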
-
Randy Dunlap authored
Fix markup warning: insert a blank line before the hint. Documentation/driver-api/dmaengine/dmatest.rst:63: WARNING: Unexpected indentation. Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
- 20 Jan, 2019 5 commits
-
-
Gustavo A. R. Silva authored
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example: struct foo { int stuff; void *entry[]; }; instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL); Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper: instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL); This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
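The transformation described above (and repeated in the sibling commits below), shown as a compact before/after; struct_size() is provided by <linux/overflow.h> and saturates on overflow in the size computation:

    struct foo {
            int stuff;
            void *entry[];
    };

    /* before: open-coded and prone to type mistakes */
    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

    /* after: */
    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);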
-
Gustavo A. R. Silva authored
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example: struct foo { int stuff; void *entry[]; }; instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL); Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper: instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL); This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Vinod Koul authored
-
Shunyong Yang authored
When dma_cookie_complete() is called in hidma_process_completed(), dma_cookie_status() will return DMA_COMPLETE in hidma_tx_status(). Then, hidma_txn_is_success() will be called to use the channel cookie mchan->last_success to do an additional DMA status check. The current code assigns mchan->last_success after dma_cookie_complete(). This creates a race in which dma_cookie_status() can return DMA_COMPLETE before mchan->last_success has been assigned correctly. The race causes hidma_tx_status() to return DMA_ERROR even though the transaction actually succeeded. Moreover, in the async_tx case, it will cause a timeout panic in async_tx_quiesce(). Kernel panic - not syncing: async_tx_quiesce: DMA error waiting for transaction ... Call trace: [<ffff000008089994>] dump_backtrace+0x0/0x1f4 [<ffff000008089bac>] show_stack+0x24/0x2c [<ffff00000891e198>] dump_stack+0x84/0xa8 [<ffff0000080da544>] panic+0x12c/0x29c [<ffff0000045d0334>] async_tx_quiesce+0xa4/0xc8 [async_tx] [<ffff0000045d03c8>] async_trigger_callback+0x70/0x1c0 [async_tx] [<ffff0000048b7d74>] raid_run_ops+0x86c/0x1540 [raid456] [<ffff0000048bd084>] handle_stripe+0x5e8/0x1c7c [raid456] [<ffff0000048be9ec>] handle_active_stripes.isra.45+0x2d4/0x550 [raid456] [<ffff0000048beff4>] raid5d+0x38c/0x5d0 [raid456] [<ffff000008736538>] md_thread+0x108/0x168 [<ffff0000080fb1cc>] kthread+0x10c/0x138 [<ffff000008084d34>] ret_from_fork+0x10/0x18 Cc: Joey Zheng <yu.zheng@hxt-semitech.com> Reviewed-by: Sinan Kaya <okaya@kernel.org> Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
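The ordering fix implied by the message, as a sketch; variable names are taken from the commit text and the surrounding locking is omitted:

    if (llstat == DMA_COMPLETE) {
            mchan->last_success = last_cookie;   /* record success first ... */
            result.result = DMA_TRANS_NOERROR;
    } else {
            result.result = DMA_TRANS_ABORTED;
    }
    dma_cookie_complete(&mdesc->desc);           /* ... then mark the cookie complete */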
-
Shunyong Yang authored
In async_tx_test_ack(), the flags field in struct dma_async_tx_descriptor is used to check the ACK status. As hidma reuses descriptors from a free list when hidma_prep_dma_*(memcpy/memset) is called, the flag stays ACKed if the descriptor has been used before. This will cause a BUG_ON in async_tx_quiesce(). kernel BUG at crypto/async_tx/async_tx.c:282! Internal error: Oops - BUG: 0 1 SMP ... task: ffff8017dd3ec000 task.stack: ffff8017dd3e8000 PC is at async_tx_quiesce+0x54/0x78 [async_tx] LR is at async_trigger_callback+0x98/0x110 [async_tx] This patch initializes the flags in dma_async_tx_descriptor with the flags passed from the caller when hidma_prep_dma_*(memcpy/memset) is called. Cc: Joey Zheng <yu.zheng@hxt-semitech.com> Reviewed-by: Sinan Kaya <okaya@kernel.org> Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
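A one-line sketch of the fix: when a descriptor is taken from the free list in the prep routine, its flags are reset from the caller's flags instead of keeping whatever the previous user left behind (field names follow the commit text):

    mdesc->desc.flags = flags;      /* clears a stale DMA_CTRL_ACK, among others */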
-
- 07 Jan, 2019 13 commits
-
-
Gustavo A. R. Silva authored
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example: struct foo { int stuff; void *entry[]; }; instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL); Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper: instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL); This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Gustavo A. R. Silva authored
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example: struct foo { int stuff; void *entry[]; }; instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL); Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper: instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL); This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Acked-by: Patrice Chotard <patrice.chotard@st.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Gustavo A. R. Silva authored
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example: struct foo { int stuff; void *entry[]; }; instance = devm_kzalloc(dev, sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL); Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper: instance = devm_kzalloc(dev, struct_size(instance, entry, count), GFP_KERNEL); This issue was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Gustavo A. R. Silva authored
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example: struct foo { int stuff; void *entry[]; }; instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL); Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper: instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL); This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Aditya Pakki authored
During driver initialization, platform_driver_register() can fail and return an error. Consistent with other invocations, this patch propagates that error to the caller. Signed-off-by: Aditya Pakki <pakki001@umn.edu> Acked-by: Sinan Kaya <okaya@kernel.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
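The pattern in question, sketched with placeholder driver names: return the result of platform_driver_register() instead of discarding it.

    static struct platform_driver my_platform_driver;   /* placeholder */

    static int __init my_dma_init(void)
    {
            /* propagate the failure instead of returning 0 unconditionally */
            return platform_driver_register(&my_platform_driver);
    }
    module_init(my_dma_init);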
-
Julia Lawall authored
Drop LIST_HEAD where the variable it declares has never been used. The semantic patch that fixes this problem is as follows: (http://coccinelle.lip6.fr/) // <smpl> @@ identifier x; @@ - LIST_HEAD(x); ... when != x // </smpl> Fixes: 4a533218 ("dmaengine: sa11x0: Split device_control") Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Julia Lawall authored
Drop LIST_HEAD where the variable it declares is never used. The variable has not been used since the function was introduced in 740aa957 ("dmaengine: pl330: Split device_control"). The semantic patch that fixes this problem is as follows: (http://coccinelle.lip6.fr/) // <smpl> @@ identifier x; @@ - LIST_HEAD(x); ... when != x // </smpl> Fixes: 740aa957 ("dmaengine: pl330: Split device_control") Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Julia Lawall authored
Drop LIST_HEAD where the variable it declares is never used. The declarations were introduced with the file, but the declared variables were not used. The semantic patch that fixes this problem is as follows: (http://coccinelle.lip6.fr/) // <smpl> @@ identifier x; @@ - LIST_HEAD(x); ... when != x // </smpl> Fixes: 6b4cd727 ("dmaengine: st_fdma: Add STMicroelectronics FDMA engine driver support") Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Julia Lawall authored
Drop LIST_HEAD where the variable it declares is never used. Commit ab703f81 ("dmaengine: dw: lazy allocation of dma descriptors") removed the uses, but not the declaration. The semantic patch that fixes this problem is as follows: (http://coccinelle.lip6.fr/) // <smpl> @@ identifier x; @@ - LIST_HEAD(x); ... when != x // </smpl> Fixes: ab703f81 ("dmaengine: dw: lazy allocation of dma descriptors") Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Julia Lawall authored
Drop LIST_HEAD where the variable it declares is never used. tmp_list has been declared since the introduction of the driver and has never been used. The two declarations of list were introduced with the containing functions but were also not used. The semantic patch that fixes this problem is as follows: (http://coccinelle.lip6.fr/) // <smpl> @@ identifier x; @@ - LIST_HEAD(x); ... when != x // </smpl> Fixes: dc78baa2 ("dmaengine: at_hdmac: new driver for the Atmel AHB DMA Controller") Fixes: 4facfe7f ("dmaengine: hdmac: Split device_control") Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com> Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild - Linus Torvalds authored
Pull more Kbuild updates from Masahiro Yamada: - improve boolinit.cocci and use_after_iter.cocci semantic patches - fix alignment for kallsyms - move 'asm goto' compiler test to Kconfig and clean up jump_label CONFIG option - generate asm-generic wrappers automatically if arch does not implement mandatory UAPI headers - remove redundant generic-y defines - misc cleanups * tag 'kbuild-v4.21-3' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: kconfig: rename generated .*conf-cfg to *conf-cfg kbuild: remove unnecessary stubs for archheader and archscripts kbuild: use assignment instead of define ... endef for filechk_* rules arch: remove redundant UAPI generic-y defines kbuild: generate asm-generic wrappers if mandatory headers are missing arch: remove stale comments "UAPI Header export list" riscv: remove redundant kernel-space generic-y kbuild: change filechk to surround the given command with { } kbuild: remove redundant target cleaning on failure kbuild: clean up rule_dtc_dt_yaml kbuild: remove UIMAGE_IN and UIMAGE_OUT jump_label: move 'asm goto' support test to Kconfig kallsyms: lower alignment on ARM scripts: coccinelle: boolinit: drop warnings on named constants scripts: coccinelle: check for redeclaration kconfig: remove unused "file" field of yylval union nds32: remove redundant kernel-space generic-y nios2: remove unneeded HAS_DMA define
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Linus Torvalds authored
Pull perf tooling updates from Ingo Molnar: "A final batch of perf tooling changes: mostly fixes and small improvements" * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (29 commits) perf session: Add comment for perf_session__register_idle_thread() perf thread-stack: Fix thread stack processing for the idle task perf thread-stack: Allocate an array of thread stacks perf thread-stack: Factor out thread_stack__init() perf thread-stack: Allow for a thread stack array perf thread-stack: Avoid direct reference to the thread's stack perf thread-stack: Tidy thread_stack__bottom() usage perf thread-stack: Simplify some code in thread_stack__process() tools gpio: Allow overriding CFLAGS tools power turbostat: Override CFLAGS assignments and add LDFLAGS to build command tools thermal tmon: Allow overriding CFLAGS assignments tools power x86_energy_perf_policy: Override CFLAGS assignments and add LDFLAGS to build command perf c2c: Increase the HITM ratio limit for displayed cachelines perf c2c: Change the default coalesce setup perf trace beauty ioctl: Beautify USBDEVFS_ commands perf trace beauty: Export function to get the files for a thread perf trace: Wire up ioctl's USBDEBFS_ cmd table generator perf beauty ioctl: Add generator for USBDEVFS_ ioctl commands tools headers uapi: Grab a copy of usbdevice_fs.h perf trace: Store the major number for a file when storing its pathname ...
-
- 06 Jan, 2019 10 commits
-
-
Linus Torvalds authored
The semantics of what "in core" means for the mincore() system call are somewhat unclear, but Linux has always (since 2.3.52, which is when mincore() was initially done) treated it as "page is available in page cache" rather than "page is mapped in the mapping". The problem with that traditional semantic is that it exposes a lot of system cache state that it really probably shouldn't, and that users shouldn't really even care about. So let's try to avoid that information leak by simply changing the semantics to be that mincore() counts actual mapped pages, not pages that might be cheaply mapped if they were faulted (note the "might be" part of the old semantics: being in the cache doesn't actually guarantee that you can access them without IO anyway, since things like network filesystems may have to revalidate the cache before use). In many ways the old semantics were somewhat insane even aside from the information leak issue. From the very beginning (and that beginning is a long time ago: 2.3.52 was released in March 2000, I think), the code had a comment saying Later we can get more picky about what "in core" means precisely. and this is that "later". Admittedly it is much later than is really comfortable. NOTE! This is a real semantic change, and it is for example known to change the output of "fincore", since that program literally does a mmap without populating it, and then doing "mincore()" on that mapping that doesn't actually have any pages in it. I'm hoping that nobody actually has any workflow that cares, and the info leak is real. We may have to do something different if it turns out that people have valid reasons to want the old semantics, and if we can limit the information leak sanely. Cc: Kevin Easton <kevin@guarana.org> Cc: Jiri Kosina <jikos@kernel.org> Cc: Masatake YAMATO <yamato@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
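A small user-space sketch of the observable difference: map a file without faulting it in, then ask mincore() about residency. Under the old semantics the vector could report pages as present purely because they sit in the page cache; under the new semantics only pages actually mapped into the calling process count. The program assumes 4 KiB pages and takes a file path as its argument.

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            size_t len = 4 * 4096;
            unsigned char vec[4];
            int fd;
            void *p;

            if (argc < 2)
                    return 1;
            fd = open(argv[1], O_RDONLY);
            if (fd < 0)
                    return 1;

            p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED)
                    return 1;

            /* nothing is read from the mapping, so nothing is faulted in */
            if (mincore(p, len, vec) == 0)
                    for (int i = 0; i < 4; i++)
                            printf("page %d: %s\n", i,
                                   (vec[i] & 1) ? "resident" : "not resident");
            return 0;
    }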
-
Linus Torvalds authored
Commit 594cc251 ("make 'user_access_begin()' do 'access_ok()'") broke both alpha and SH booting in qemu, as noticed by Guenter Roeck. It turns out that the bug wasn't actually in that commit itself (which would have been surprising: it was mostly a no-op), but in how the addition of access_ok() to the strncpy_from_user() and strnlen_user() functions now triggered the case where those functions would test the access of the very last byte of the user address space. The string functions actually did that user range test before too, but they did it manually by just comparing against user_addr_max(). But with user_access_begin() doing the check (using "access_ok()"), it now exposed problems in the architecture implementations of that function. For example, on alpha, the access_ok() helper macro looked like this: #define __access_ok(addr, size) \ ((get_fs().seg & (addr | size | (addr+size))) == 0) and what it basically tests is if any of the high bits get set (the USER_DS masking value is 0xfffffc0000000000). And that's completely wrong for the "addr+size" check. Because it's off-by-one for the case where we check to the very end of the user address space, which is exactly what the strn*_user() functions do. Why? Because "addr+size" will be exactly the size of the address space, so trying to access the last byte of the user address space will fail the __access_ok() check, even though it shouldn't. As a result, the user string accessor functions failed consistently - because they literally don't know how long the string is going to be, and the max access is going to be that last byte of the user address space. Side note: that alpha macro is buggy for another reason too - it re-uses the arguments twice. And SH has another version of almost the exact same bug: #define __addr_ok(addr) \ ((unsigned long __force)(addr) < current_thread_info()->addr_limit.seg) so far so good: yes, a user address must be below the limit. But then: #define __access_ok(addr, size) \ (__addr_ok((addr) + (size))) is wrong with the exact same off-by-one case: the case when "addr+size" is exactly _equal_ to the limit is actually perfectly fine (think "one byte access at the last address of the user address space") The SH version is actually seriously buggy in another way: it doesn't actually check for overflow, even though it did copy the _comment_ that talks about overflow. So it turns out that both SH and alpha actually have completely buggy implementations of access_ok(), but they happened to work in practice (although the SH overflow one is a serious serious security bug, not that anybody likely cares about SH security). This fixes the problems by using a similar macro on both alpha and SH. It isn't trying to be clever, the end address is based on this logic: unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b; which basically says "add start and length, and then subtract one unless the length was zero". We can't subtract one for a zero length, or we'd just hit an underflow instead. For a lot of access_ok() users the length is a constant, so this isn't actually as expensive as it initially looks. Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net> Cc: Matt Turner <mattst88@gmail.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
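Roughly how the corrected alpha-style macro would look, based on the end-address logic and the get_fs().seg masking quoted in the message above; a sketch, not the verbatim arch code:

    #define __access_ok(addr, size) ({                                      \
            unsigned long __ao_a = (unsigned long)(addr);                   \
            unsigned long __ao_b = (unsigned long)(size);                   \
            /* "add start and length, subtract one unless length is zero" */\
            unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b;            \
            (get_fs().seg & (__ao_a | __ao_b | __ao_end)) == 0;             \
    })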
-
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/fscrypt - Linus Torvalds authored
Pull fscrypt updates from Ted Ts'o: "Add Adiantum support for fscrypt" * tag 'fscrypt_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/fscrypt: fscrypt: add Adiantum support
-
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4 - Linus Torvalds authored
Pull ext4 bug fixes from Ted Ts'o: "Fix a number of ext4 bugs" * tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: ext4: fix special inode number checks in __ext4_iget() ext4: track writeback errors using the generic tracking infrastructure ext4: use ext4_write_inode() when fsyncing w/o a journal ext4: avoid kernel warning when writing the superblock to a dead device ext4: fix a potential fiemap/page fault deadlock w/ inline_data ext4: make sure enough credits are reserved for dioread_nolock writes
-
git://git.infradead.org/users/hch/dma-mapping - Linus Torvalds authored
Pull dma-mapping fixes from Christoph Hellwig: "Fix various regressions introduced in this cycle: - fix dma-debug tracking for the map_page / map_single consolidation - properly stub out DMA mapping symbols for !HAS_DMA builds to avoid link failures - fix AMD Gart direct mappings - setup the dma address for no kernel mappings using the remap allocator" * tag 'dma-mapping-4.21-1' of git://git.infradead.org/users/hch/dma-mapping: dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING for remapped allocations x86/amd_gart: fix unmapping of non-GART mappings dma-mapping: remove a few unused exports dma-mapping: properly stub out the DMA API for !CONFIG_HAS_DMA dma-mapping: remove dmam_{declare,release}_coherent_memory dma-mapping: implement dmam_alloc_coherent using dmam_alloc_attrs dma-mapping: implement dma_map_single_attrs using dma_map_page_attrs
-
Linus Torvalds authored
Merge tag 'tag-chrome-platform-for-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/bleung/chrome-platform Pull chrome platform updates from Benson Leung: - Changes for EC_MKBP_EVENT_SENSOR_FIFO handling. - Also, maintainership changes. Olofj out, Enric balletbo in. * tag 'tag-chrome-platform-for-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/bleung/chrome-platform: MAINTAINERS: add maintainers for ChromeOS EC sub-drivers MAINTAINERS: platform/chrome: Add Enric as a maintainer MAINTAINERS: platform/chrome: remove myself as maintainer platform/chrome: don't report EC_MKBP_EVENT_SENSOR_FIFO as wakeup platform/chrome: straighten out cros_ec_get_{next,host}_event() error codes
-
git://github.com/andersson/remoteproc - Linus Torvalds authored
Pull hwspinlock updates from Bjorn Andersson: "This adds support for the hardware semaphores found in STM32MP1" * tag 'hwlock-v4.21' of git://github.com/andersson/remoteproc: hwspinlock: fix return value check in stm32_hwspinlock_probe() hwspinlock: add STM32 hwspinlock device dt-bindings: hwlock: Document STM32 hwspinlock bindings
-
Eric Biggers authored
Add support for the Adiantum encryption mode to fscrypt. Adiantum is a tweakable, length-preserving encryption mode with security provably reducible to that of XChaCha12 and AES-256, subject to a security bound. It's also a true wide-block mode, unlike XTS. See the paper "Adiantum: length-preserving encryption for entry-level processors" (https://eprint.iacr.org/2018/720.pdf) for more details. Also see commit 059c2a4d ("crypto: adiantum - add Adiantum support"). On sufficiently long messages, Adiantum's bottlenecks are XChaCha12 and the NH hash function. These algorithms are fast even on processors without dedicated crypto instructions. Adiantum makes it feasible to enable storage encryption on low-end mobile devices that lack AES instructions; currently such devices are unencrypted. On ARM Cortex-A7, on 4096-byte messages Adiantum encryption is about 4 times faster than AES-256-XTS encryption; decryption is about 5 times faster. In fscrypt, Adiantum is suitable for encrypting both file contents and names. With filenames, it fixes a known weakness: when two filenames in a directory share a common prefix of >= 16 bytes, with CTS-CBC their encrypted filenames share a common prefix too, leaking information. Adiantum does not have this problem. Since Adiantum also accepts long tweaks (IVs), it's also safe to use the master key directly for Adiantum encryption rather than deriving per-file keys, provided that the per-file nonce is included in the IVs and the master key isn't used for any other encryption mode. This configuration saves memory and improves performance. A new fscrypt policy flag is added to allow users to opt-in to this configuration. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
-
git://git.lwn.net/linux - Linus Torvalds authored
Pull documentation fixes from Jonathan Corbet: "A handful of late-arriving documentation fixes" * tag 'docs-5.0-fixes' of git://git.lwn.net/linux: doc: filesystems: fix bad references to nonexistent ext4.rst file Documentation/admin-guide: update URL of LKML information link Docs/kernel-api.rst: Remove blk-tag.c reference
-
git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394 - Linus Torvalds authored
Pull firewire fixlet from Stefan Richter: "Remove an explicit dependency in Kconfig which is implied by another dependency" * tag 'firewire-update' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394: firewire: Remove depends on HAS_DMA in case of platform dependency
-