- 10 Nov, 2016 38 commits
-
-
Aneesh Kumar K.V authored
commit bd77c449 upstream. Before this patch, we used tlbiel if we had only ever run on this core. That was mostly derived from the nohash usage of the same. But that is incorrect: ISA 3.0 clarifies tlbiel such that "All TLB entries that have all of the following properties are made invalid on the thread executing the tlbiel instruction", i.e. tlbiel only invalidates TLB entries on the current thread. So if the mm has been used on any other thread (i.e. cpu) then we must broadcast the invalidate. This bug could lead to invalid TLB entries if a program runs on multiple threads of a core. Hence use tlbiel only if we have only ever run on the current cpu. Fixes: 1a472c9d ("powerpc/mm/radix: Add tlbflush routines") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
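A minimal sketch of the decision the fix enforces, simplified from the powerpc radix TLB code (the helper shape here is illustrative): tlbiel is only safe when the mm has never been active on any thread other than the current one; otherwise a broadcast tlbie is required.

```c
#include <linux/mm_types.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

/* Sketch: thread-local tlbiel is only valid if no other CPU ever ran this mm. */
static inline bool mm_is_thread_local(struct mm_struct *mm)
{
	return cpumask_equal(mm_cpumask(mm),
			     cpumask_of(smp_processor_id()));
}
```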
-
Segher Boessenkool authored
commit 80f23935 upstream. PowerPC's "cmp" instruction has four operands. Normally people write "cmpw" or "cmpd" for the second cmp operand 0 or 1. But, frequently people forget, and write "cmp" with just three operands. With older binutils this is silently accepted as if this was "cmpw", while often "cmpd" is wanted. With newer binutils GAS will complain about this for 64-bit code. For 32-bit code it still silently assumes "cmpw" is what is meant. In this instance the code comes directly from ISA v2.07, including the cmp, but cmpd is correct. Backport to stable so that new toolchains can build old kernels. Fixes: 948cf67c ("powerpc: Add NAP mode support on Power7 in HV mode") Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com> Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chris Mason authored
commit 570dd450 upstream. btrfs_remove_all_log_ctxs takes a shortcut where it avoids walking the list because it knows all of the waiters are patiently waiting for the commit to finish. But, there's a small race where btrfs_sync_log can remove itself from the list if it finds a log commit is already done. Also, it uses list_del_init() to remove itself from the list, but there's no way to know if btrfs_remove_all_log_ctxs has already run, so we don't know for sure if it is safe to call list_del_init(). This gets rid of all the shortcuts for btrfs_remove_all_log_ctxs(), and just calls it with the proper locking. This is part two of the corruption fixed by cbd60aa7. I should have done this in the first place, but convinced myself the optimizations were safe. A 12 hour run of dbench 2048 will eventually trigger a list debug WARN_ON for the list_del_init() in btrfs_sync_log(). Fixes: d1433deb Reported-by: Dave Jones <davej@codemonkey.org.uk> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vaibhav Jain authored
commit a05b82d5 upstream. In some error paths in functions cxl_start_context and afu_ioctl_start_work pid references to the current & group-leader tasks can leak after they are taken. This patch fixes these error paths to release these pid references before exiting the error path. Fixes: 7b8ad495 ("cxl: Fix DSI misses when the context owning task exits") Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Reported-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com> Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
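A condensed sketch of the error-path pattern the fix adds; the function shape and the do_start_work() helper are illustrative stand-ins, not the driver's real code.

```c
#include <linux/pid.h>
#include <linux/sched.h>

static int start_work_sketch(struct cxl_context *ctx)
{
	int rc;

	/* references taken up front... */
	ctx->pid = get_task_pid(current, PIDTYPE_PID);
	ctx->glpid = get_task_pid(current->group_leader, PIDTYPE_PID);

	rc = do_start_work(ctx);	/* hypothetical stand-in for the real work */
	if (rc) {
		/* ...must be dropped again before bailing out */
		put_pid(ctx->glpid);
		put_pid(ctx->pid);
		ctx->glpid = ctx->pid = NULL;
		return rc;
	}
	return 0;
}
```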
-
Arve Hjønnevåg authored
commit 4afb604e upstream. Prevents leaking pointers between processes. Signed-off-by: Arve Hjønnevåg <arve@android.com> Signed-off-by: Martijn Coenen <maco@android.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Arve Hjønnevåg authored
commit 0a3ffab9 upstream. Prevent using a binder_ref with only weak references where a strong reference is required. Signed-off-by: Arve Hjønnevåg <arve@android.com> Signed-off-by: Martijn Coenen <maco@android.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hui Wang authored
commit 6aecd871 upstream. These machines use the ALC255 codec and have a pin config definition that differs from the ones in the existing pin quirk table. Add them to the table to fix the problem. Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit 1a3f0991 upstream. ASRock B150M Pro4/D3 mobo with ALC892 codec doesn't seem to provide proper pins for the surround outputs, hence we need to specify the pincfgs manually with a couple of other corrections. Reported-and-tested-by: Benjamin Valentin <benpicco@googlemail.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hui Wang authored
commit f771d5bb upstream. We have a new Dell laptop model which uses ALC295. Its pin definition is different from the existing ones in the pin quirk table, so to fix headset mic detection and the mic mute LED problem, we need to add the new pin definition to the pin quirk table. Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ard Biesheuvel authored
commit 3ab7511e upstream. Commit 49d9e77e ("ALSA: hda - Fix system panic when DMA > 40 bits for Nvidia audio controllers") simply disabled any DMA exceeding 32 bits for NVidia devices, even though they are capable of performing DMA up to 40 bits. On some architectures (such as arm64), system memory is not guaranteed to be 32-bit addressable by PCI devices, and so this change prevents NVidia devices from working on platforms such as AMD Seattle. Since the original commit already mentioned that up to 40 bits of DMA is supported, and given that the code has been updated in the meantime to support a 40 bit DMA mask on other devices, revert commit 49d9e77e and explicitly set the DMA mask to 40 bits for NVidia devices. Fixes: 49d9e77e ('ALSA: hda - Fix system panic when DMA > 40 bits...') Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
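A minimal sketch of the end result, assuming the usual HDA PCI probe variables; the surrounding capability checks are omitted.

```c
#include <linux/dma-mapping.h>

/* NVIDIA HDA controllers can address 40 bits of DMA, so advertise that
 * instead of falling back to a 32-bit mask. */
int dma_bits = 40;
int err = dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(dma_bits));
```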
-
Takashi Iwai authored
commit 9b50898a upstream. The recent rewrite of the sequencer time accounting using timespec64 in the commit [3915bf29: ALSA: seq_timer: use monotonic times internally] introduced a bad regression. Namely, the time reported back doesn't increase but goes back and forth. The culprit was obvious: the delta is stored to the result (cur_time = delta), instead of adding the delta (cur_time += delta)! Let's fix it. Fixes: 3915bf29 ('ALSA: seq_timer: use monotonic times internally') Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=177571 Reported-by: Yves Guillemot <yc.guillemot@wanadoo.fr> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
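The regression is small enough to show inline; the sketch below uses a plain scalar where the real code operates on timespec64 values.

```c
cur_time = delta;	/* buggy: each update overwrites the accumulated time */
cur_time += delta;	/* fixed: the delta is accumulated, so time only moves forward */
```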
-
Marcel Hasler authored
commit bdc3478f upstream. The stk1160 chip needs QUIRK_AUDIO_ALIGN_TRANSFER. This patch resolves the issue reported on the mailing list (http://marc.info/?l=linux-sound&m=139223599126215&w=2) and also fixes bug 180071 (https://bugzilla.kernel.org/show_bug.cgi?id=180071). Signed-off-by: Marcel Hasler <mahasler@gmail.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dan Williams authored
commit 52e73eb2 upstream. We need to wait until the percpu_ref is released before exit. Otherwise, we sometimes lose the race and trigger this new warning that was added in v4.9 (commit a67823c1 "percpu-refcount: init ->confirm_switch member properly"): WARNING: CPU: 0 PID: 3629 at lib/percpu-refcount.c:107 percpu_ref_exit+0x51/0x60 [..] Call Trace: [<ffffffff814bf093>] dump_stack+0x85/0xc2 [<ffffffff810b15db>] __warn+0xcb/0xf0 [<ffffffff810b170d>] warn_slowpath_null+0x1d/0x20 [<ffffffff814d70c1>] percpu_ref_exit+0x51/0x60 [<ffffffffa005706a>] dax_pmem_percpu_exit+0x1a/0x50 [dax_pmem] [<ffffffff81615f1f>] devm_action_release+0xf/0x20 Fixes: ab68f262 ("/dev/dax, pmem: direct access to persistent memory") Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
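A minimal sketch of the teardown ordering the fix enforces, assuming a completion that the percpu_ref release callback signals (the struct and field names are assumptions):

```c
#include <linux/percpu-refcount.h>
#include <linux/completion.h>

percpu_ref_kill(&dax_pmem->ref);
wait_for_completion(&dax_pmem->cmp);	/* completed from the ref's release() */
percpu_ref_exit(&dax_pmem->ref);	/* now safe: ->confirm_switch has settled */
```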
-
Artem Savkov authored
commit 31e6ec45 upstream. Since BIG_KEYS can't be compiled as a module, it requires one of the "stdrng" providers to be compiled into the kernel. Otherwise big_key_crypto_init() fails at the crypto_alloc_rng step and the next dereference of big_key_skcipher (e.g. in big_key_preparse()) results in a NULL pointer dereference. Fixes: 13100a72 ('Security: Keys: Big keys stored encrypted') Signed-off-by: Artem Savkov <asavkov@redhat.com> Signed-off-by: David Howells <dhowells@redhat.com> cc: Stephan Mueller <smueller@chronox.de> cc: Kirill Marinushkin <k.marinushkin@gmail.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
David Howells authored
commit 7df3e59c upstream. big_key has two separate initialisation functions, one that registers the key type and one that registers the crypto. If the key type fails to register, there's no problem if the crypto registers successfully because there's no way to reach the crypto except through the key type. However, if the key type registers successfully but the crypto does not, big_key_rng and big_key_blkcipher may end up set to NULL - but the code neither checks for this nor unregisters the big key key type. Furthermore, since the key type is registered before the crypto, it is theoretically possible for the kernel to try adding a big_key before the crypto is set up, leading to the same effect. Fix this by merging big_key_crypto_init() and big_key_init() and calling the resulting function late. If they're going to be encrypted, we shouldn't be creating big_keys before we have the facilities to do the encryption available. The key type registration is also moved after the crypto initialisation. The fix also includes message printing on failure. If the big_key type isn't correctly set up, simply doing: dd if=/dev/zero bs=4096 count=1 | keyctl padd big_key a @s ought to cause an oops. Fixes: 13100a72 ('Security: Keys: Big keys stored encrypted') Signed-off-by: David Howells <dhowells@redhat.com> cc: Peter Hlavaty <zer0mem@yahoo.com> cc: Kirill Marinushkin <k.marinushkin@gmail.com> cc: Artem Savkov <asavkov@redhat.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
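A condensed sketch of the merged initialisation described above (error reporting trimmed, cipher parameters illustrative): the crypto is set up first, the key type is registered only afterwards, and everything is unwound if a later step fails.

```c
static int __init big_key_init(void)
{
	int ret;

	big_key_rng = crypto_alloc_rng("stdrng", 0, 0);
	if (IS_ERR(big_key_rng))
		return PTR_ERR(big_key_rng);

	big_key_skcipher = crypto_alloc_skcipher("ecb(aes)", 0, CRYPTO_ALG_ASYNC);
	if (IS_ERR(big_key_skcipher)) {
		ret = PTR_ERR(big_key_skcipher);
		goto error_rng;
	}

	/* register the key type only once the crypto it depends on exists */
	ret = register_key_type(&key_type_big_key);
	if (ret < 0)
		goto error_cipher;

	return 0;

error_cipher:
	crypto_free_skcipher(big_key_skcipher);
error_rng:
	crypto_free_rng(big_key_rng);
	return ret;
}
late_initcall(big_key_init);
```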
-
David Howells authored
commit 03dab869 upstream. This fixes CVE-2016-7042. Fix a short sprintf buffer in proc_keys_show(). If the gcc stack protector is turned on, this can cause a panic due to stack corruption. The problem is that xbuf[] is not big enough to hold a 64-bit timeout rendered as weeks: (gdb) p 0xffffffffffffffffULL/(60*60*24*7) $2 = 30500568904943 That's 14 chars plus NUL, not 11 chars plus NUL. Expand the buffer to 16 chars. I think the unpatched code apparently works if the stack-protector is not enabled because on a 32-bit machine the buffer won't be overflowed and on a 64-bit machine there's a 64-bit aligned pointer at one side and an int that isn't checked again on the other side. The panic incurred looks something like: Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: ffffffff81352ebe CPU: 0 PID: 1692 Comm: reproducer Not tainted 4.7.2-201.fc24.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 0000000000000086 00000000fbbd2679 ffff8800a044bc00 ffffffff813d941f ffffffff81a28d58 ffff8800a044bc98 ffff8800a044bc88 ffffffff811b2cb6 ffff880000000010 ffff8800a044bc98 ffff8800a044bc30 00000000fbbd2679 Call Trace: [<ffffffff813d941f>] dump_stack+0x63/0x84 [<ffffffff811b2cb6>] panic+0xde/0x22a [<ffffffff81352ebe>] ? proc_keys_show+0x3ce/0x3d0 [<ffffffff8109f7f9>] __stack_chk_fail+0x19/0x30 [<ffffffff81352ebe>] proc_keys_show+0x3ce/0x3d0 [<ffffffff81350410>] ? key_validate+0x50/0x50 [<ffffffff8134db30>] ? key_default_cmp+0x20/0x20 [<ffffffff8126b31c>] seq_read+0x2cc/0x390 [<ffffffff812b6b12>] proc_reg_read+0x42/0x70 [<ffffffff81244fc7>] __vfs_read+0x37/0x150 [<ffffffff81357020>] ? security_file_permission+0xa0/0xc0 [<ffffffff81246156>] vfs_read+0x96/0x130 [<ffffffff81247635>] SyS_read+0x55/0xc0 [<ffffffff817eb872>] entry_SYSCALL_64_fastpath+0x1a/0xa4 Reported-by: Ondrej Kozina <okozina@redhat.com> Signed-off-by: David Howells <dhowells@redhat.com> Tested-by: Ondrej Kozina <okozina@redhat.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
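The arithmetic from the report, spelled out; the variable name and format below are assumptions based on the description, not the function's exact code.

```c
char xbuf[16];	/* 14 digits + 'w' suffix + NUL; the old, smaller buffer overflowed */

/* 0xffffffffffffffff / (60*60*24*7) = 30500568904943 -> 14 characters */
sprintf(xbuf, "%lluw", timeout_in_weeks);	/* timeout_in_weeks: assumed name */
```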
-
Eric Ernst authored
commit 3105f234 upstream. Initial logic for checking CPU match resulted in OR of CPU features rather than the intended AND. Updated to use boot_cpu_has macro rather than x86_match_cpu. In addition, MWAIT is the only required CPU feature for idle injection to work. Drop other feature requirements since they are only needed for optimal efficiency. Signed-off-by: Eric Ernst <eric.ernst@linux.intel.com> Acked-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Zhang Rui <rui.zhang@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
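A minimal sketch of the corrected check: boot_cpu_has() tests the one feature that is actually required, with AND semantics rather than the accidental OR of a match table.

```c
/* MWAIT is the only hard requirement for idle injection */
if (!boot_cpu_has(X86_FEATURE_MWAIT))
	return -ENODEV;
```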
-
Johannes Weiner authored
commit 89a28483 upstream. On 4.0, we saw a stack corruption from a page fault entering direct memory cgroup reclaim, calling into btrfs_releasepage(), which then tried to allocate an extent and recursed back into a kmem charge ad nauseam: [...] btrfs_releasepage+0x2c/0x30 try_to_release_page+0x32/0x50 shrink_page_list+0x6da/0x7a0 shrink_inactive_list+0x1e5/0x510 shrink_lruvec+0x605/0x7f0 shrink_zone+0xee/0x320 do_try_to_free_pages+0x174/0x440 try_to_free_mem_cgroup_pages+0xa7/0x130 try_charge+0x17b/0x830 memcg_charge_kmem+0x40/0x80 new_slab+0x2d9/0x5a0 __slab_alloc+0x2fd/0x44f kmem_cache_alloc+0x193/0x1e0 alloc_extent_state+0x21/0xc0 __clear_extent_bit+0x2b5/0x400 try_release_extent_mapping+0x1a3/0x220 __btrfs_releasepage+0x31/0x70 btrfs_releasepage+0x2c/0x30 try_to_release_page+0x32/0x50 shrink_page_list+0x6da/0x7a0 shrink_inactive_list+0x1e5/0x510 shrink_lruvec+0x605/0x7f0 shrink_zone+0xee/0x320 do_try_to_free_pages+0x174/0x440 try_to_free_mem_cgroup_pages+0xa7/0x130 try_charge+0x17b/0x830 mem_cgroup_try_charge+0x65/0x1c0 handle_mm_fault+0x117f/0x1510 __do_page_fault+0x177/0x420 do_page_fault+0xc/0x10 page_fault+0x22/0x30 On later kernels, kmem charging is opt-in rather than opt-out, and that particular kmem allocation in btrfs_releasepage() is no longer being charged and won't recurse and overrun the stack anymore. But it's not impossible for an accounted allocation to happen from the memcg direct reclaim context, and we needed to reproduce this crash many times before we even got a useful stack trace out of it. Like other direct reclaimers, mark tasks in memcg reclaim PF_MEMALLOC to avoid recursing into any other form of direct reclaim. Then let recursive charges from PF_MEMALLOC contexts bypass the cgroup limit. Link: http://lkml.kernel.org/r/20161025141050.GA13019@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Cc: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
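A minimal sketch of the two halves of the fix, heavily simplified from the real reclaim and charge paths:

```c
/* in try_to_free_mem_cgroup_pages(): mark memcg reclaim like other direct reclaim */
current->flags |= PF_MEMALLOC;
nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
current->flags &= ~PF_MEMALLOC;

/* in try_charge(): charges made from a PF_MEMALLOC context bypass the limit */
if (unlikely(current->flags & PF_MEMALLOC))
	goto force;	/* do not recurse into reclaim again */
```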
-
Joonsoo Kim authored
commit 86d9f485 upstream. There is a bug report that SLAB makes extreme load average due to over 2000 kworker threads. https://bugzilla.kernel.org/show_bug.cgi?id=172981 This issue is caused by the kmemcg feature, which tries to create a new set of kmem_caches for each memcg. Recently, kmem_cache creation was slowed by synchronize_sched(), and further kmem_cache creation is also delayed since kmem_cache creation is synchronized by a global slab_mutex lock. So, the number of kworkers that try to create kmem_caches increases quietly. synchronize_sched() is for lockless access to a node's shared array, but it's not needed when a new kmem_cache is created. So, this patch rules out that case. Fixes: 801faf0d ("mm/slab: lockless decision to grow cache") Link: http://lkml.kernel.org/r/1475734855-4837-1-git-send-email-iamjoonsoo.kim@lge.com Reported-by: Doug Smythies <dsmythies@telus.net> Tested-by: Doug Smythies <dsmythies@telus.net> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
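A minimal sketch of the condition, with the argument names assumed from the description: the RCU-sched grace period is only needed when an existing shared array is being replaced, not when a brand-new kmem_cache grows its first arrays.

```c
/* skip synchronize_sched() for a brand-new cache: it has no lockless readers yet */
if (old_shared && force_change)
	synchronize_sched();
```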
-
Alexander Polakov authored
commit 1bc11d70 upstream. As described in https://bugzilla.kernel.org/show_bug.cgi?id=177821: After some analysis it seems to be that the problem is in alloc_super(). In case list_lru_init_memcg() fails it goes into destroy_super(), which calls list_lru_destroy(). And in list_lru_init() we see that in case memcg_init_list_lru() fails, lru->node is freed, but not set NULL, which then leads list_lru_destroy() to believe it is initialized and call memcg_destroy_list_lru(). memcg_destroy_list_lru() in turn can access lru->node[i].memcg_lrus, which is NULL. [akpm@linux-foundation.org: add comment] Signed-off-by: Alexander Polakov <apolyakov@beget.ru> Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
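A minimal sketch of the failure path in list_lru_init() after the fix (the error label is assumed):

```c
err = memcg_init_list_lru(lru, memcg_aware);
if (err) {
	kfree(lru->node);
	/* so a later list_lru_destroy() sees the lru as uninitialized */
	lru->node = NULL;
	goto out;
}
```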
-
Darrick J. Wong authored
commit 58d78967 upstream. The function xfs_calc_dquots_per_chunk takes a parameter in units of basic blocks. The kernel seems to get the units wrong, but userspace got 'fixed' by commenting out the unnecessary conversion. Fix both. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lars-Peter Clausen authored
commit 953b956a upstream. When allocating a new line handle or event, a file is allocated and associated with it. The file is attached to a file descriptor of the current process and the file descriptor is returned to userspace using copy_to_user(). If this copy operation fails, the line handle or event allocation is aborted, all acquired resources are freed and an error is returned. But the file struct is not freed and is left attached to the userspace application, and even though the file descriptor number was not copied it is trivial to guess. If a userspace application performs an IOCTL on such a leftover file descriptor it will trigger a use-after-free, and if the file descriptor is closed (at the latest when the application exits) a double-free is triggered. anon_inode_getfd() performs three tasks: allocate a file struct, allocate a file descriptor for the current process, and install the file struct in the file descriptor. As soon as the file struct is installed in the file descriptor it is accessible by userspace (even if the IOCTL itself hasn't completed yet), which means uninstalling the fd on the error path is not an option, since userspace might already have obtained a reference to the file. Instead anon_inode_getfd() needs to be broken into its individual steps. The allocation of the file struct and file descriptor is done first, then the copy_to_user() is executed, and only if it succeeds is the file installed. Since the file struct is reference counted it cannot simply be freed; its reference needs to be dropped, which will also call the release() callback, which frees the state attached to the file. So in this case the normal error cleanup path should not be taken. Fixes: d932cd49 ("gpio: free handles in fringe cases") Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
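A minimal sketch of the pattern described above, with ioctl-specific names assumed and the cleanup of the private state on the early errors omitted: the file is only installed once copy_to_user() has succeeded.

```c
fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC);
if (fd < 0)
	return fd;

file = anon_inode_getfile("gpio-linehandle", &linehandle_fileops,
			  lh, O_RDONLY | O_CLOEXEC);
if (IS_ERR(file)) {
	put_unused_fd(fd);
	return PTR_ERR(file);
}

if (copy_to_user(ip, &handlereq, sizeof(handlereq))) {
	fput(file);		/* drops the reference; release() frees the state */
	put_unused_fd(fd);
	return -EFAULT;
}

fd_install(fd, file);		/* only now does userspace see the descriptor */
```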
-
Lars-Peter Clausen authored
commit d82aa4a8 upstream. The GPIOHANDLE_GET_LINE_VALUES_IOCTL handler allocates a gpiohandle_data struct on the stack and then passes it to copy_to_user(). But only the first element of the values array in the struct is set, which leaves the struct partially initialized. This exposes the previous, potentially sensitive, stack content to the issuing userspace application. To avoid this make sure that the struct is fully initialized. Cc: stable@vger.kernel.org Fixes: 61f922db ("gpio: userspace ABI for reading GPIO line events") Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
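A minimal sketch of the fix; the same memset pattern also covers the two related information leaks further down (the line-handle values and the chip-info ioctl).

```c
struct gpiohandle_data ghd;

memset(&ghd, 0, sizeof(ghd));	/* no leftover stack bytes can reach userspace */
/* fill in only the requested values, then copy_to_user() the whole struct */
```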
-
Lars-Peter Clausen authored
commit ac7dbb99 upstream. The GPIO_GET_LINEEVENT_IOCTL currently ignores unknown or undefined linehandle and lineevent flags. From a backwards and forwards compatibility viewpoint it is highly desirable to reject unknown flags though. On one hand an application that is using newer flags and is running on an older kernel has no way to detect if the new flags were handled correctly if they are silently discarded. On the other hand an application that (accidentally) passes undefined flags will run fine on an older kernel, but may break on a newer kernel when these flags get defined. Ensure that requests that have undefined flags set are rejected with an error, rather than silently discarding the undefined flags. Fixes: 61f922db ("gpio: userspace ABI for reading GPIO line events") Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lars-Peter Clausen authored
commit e3e847c7 upstream. The GPIO_GET_LINEHANDLE_IOCTL currently ignores unknown or undefined linehandle flags. From a backwards and forwards compatibility viewpoint it is highly desirable to reject unknown flags though. On one hand an application that is using newer flags and is running on an older kernel has no way to detect if the new flags were handled correctly if they are silently discarded. On the other hand an application that (accidentally) passes undefined flags will run fine on an older kernel, but may break on a newer kernel when these flags get defined. Ensure that requests that have undefined flags set are rejected with an error, rather than silently discarding the undefined flags. Fixes: d7c51b47 ("gpio: userspace ABI for reading/writing GPIO lines") Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
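A minimal sketch of the validation added; the mask macro name is an assumption standing in for "all flag bits the kernel knows about", and the same check with the corresponding event-flag mask applies to the lineevent ioctl above.

```c
/* reject requests that carry flag bits this kernel does not define */
if (handlereq.flags & ~GPIOHANDLE_REQUEST_VALID_FLAGS)
	return -EINVAL;
```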
-
Lars-Peter Clausen authored
commit b8b0e3d3 upstream. The line offset that is used as an index into the descs array is provided by userspace and might go beyond the bounds of the array. If that happens undefined behavior will occur. Make sure that the offset is within the bounds of the desc array and reject any requests that specify a value outside of it. Fixes: 61f922db ("gpio: userspace ABI for reading GPIO line events") Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lars-Peter Clausen authored
commit 3eded5d8 upstream. The GPIOHANDLE_GET_LINE_VALUES_IOCTL handler allocates a gpiohandle_data struct on the stack and then passes it to copy_to_user(). But depending on the number of requested line handles the struct is only partially initialized. This exposes the previous, potentially sensitive, stack content to the issuing userspace application. To avoid this make sure that the struct is fully initialized. Fixes: d7c51b47 ("gpio: userspace ABI for reading/writing GPIO lines") Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lars-Peter Clausen authored
commit e405f9fc upstream. The line offset that is used as an index into the descs array is provided by userspace and might go beyond the bounds of the array. If that happens undefined behavior will occur. Make sure that the offset is within the bounds of the desc array and reject any requests that specify a value outside of it. Fixes: d7c51b47 ("gpio: userspace ABI for reading/writing GPIO lines") Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lars-Peter Clausen authored
commit 0f4bbb23 upstream. The GPIO_GET_CHIPINFO_IOCTL handler allocates a gpiochip_info struct on the stack and then passes it to copy_to_user(). But depending on the length of the GPIO chip name and label the struct is only partially initialized. This exposes the previous, potentially sensitive, stack content to the issuing userspace application. To avoid this make sure that the struct is fully initialized. Fixes: 521a2ad6 ("gpio: add userspace ABI for GPIO line information") Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lars-Peter Clausen authored
commit 1f1cc456 upstream. The current line offset validation is off by one. Depending on the data stored behind the descs array this can either cause undefined behavior or disclose arbitrary, potentially sensitive, memory to the issuing userspace application. Make sure that offset is within the bounds of the desc array. Fixes: 521a2ad6 ("gpio: add userspace ABI for GPIO line information") Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
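A minimal sketch of the corrected bounds check (field names assumed); the same comparison is what the two line-offset validations added in the ioctl fixes above rely on.

```c
if (lineinfo.line_offset >= gdev->ngpio)	/* '>' here was the off-by-one */
	return -EINVAL;
```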
-
David Arcari authored
commit 67bf5156 upstream. acpi_dev_gpio_irq_get() currently ignores the error returned by acpi_get_gpiod_by_index() and overwrites it with -ENOENT. The problem is that this error can be -EPROBE_DEFER, which blows up some drivers when the module ordering is not correct. Signed-off-by: David Arcari <darcari@redhat.com> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
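A minimal sketch of the change, with the call shape simplified: propagate the lookup's actual error instead of overwriting it.

```c
desc = acpi_get_gpiod_by_index(adev, NULL, index, &info);
if (IS_ERR(desc))
	return PTR_ERR(desc);	/* e.g. -EPROBE_DEFER, previously flattened to -ENOENT */
```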
-
Mark Rutland authored
commit 21753583 upstream. Back in commit f56141e3 ("all arches, signal: move restart_block to struct task_struct"), all architectures and core code were changed to use task_struct::restart_block. However, when h8300 support was subsequently restored in v4.2, it was not updated to account for this, and maintains thread_info::restart_block, which is not kept in sync. This patch drops the redundant restart_block from thread_info, and moves h8300 to the common one in task_struct, ensuring that syscall restarting always works as expected. Fixes: f56141e3 ("all arches, signal: move restart_block to struct task_struct") Link: http://lkml.kernel.org/r/1476714934-11635-1-git-send-email-mark.rutland@arm.com Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: uclinux-h8-devel@lists.sourceforge.jp Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
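A one-line sketch of what the conversion looks like on the h8300 signal path after the fix: restart state is reached through current, not through the architecture's thread_info copy.

```c
current->restart_block.fn = do_no_restart_syscall;
```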
-
Ralf Ramsauer authored
commit e0af98a7 upstream. Instantiated SPI device nodes are marked with OF_POPULATE. This was introduced in bd6c1644. On unloading, loaded device nodes will of course be unmarked. The problem is nodes that fail during initialisation: if a node fails, it won't be unloaded and hence won't be unmarked. If a SPI driver module is unloaded and reloaded, it will skip nodes that failed before. Skip device nodes that are already populated and mark them only in case of success. Note that the same issue exists for I2C. Fixes: bd6c1644 ("spi: Mark instantiated device nodes with OF_POPULATE") Signed-off-by: Ralf Ramsauer <ralf@ramses-pyramidenbau.de> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Pantelis Antoniou <pantelis.antoniou@konsulko.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
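A minimal sketch of the registration loop after the fix, using the OF flag helpers and the SPI core's internal registration function (surrounding cleanup simplified):

```c
for_each_available_child_of_node(master->dev.of_node, nc) {
	/* skip nodes that are already populated, e.g. left over from a failed load */
	if (of_node_test_and_set_flag(nc, OF_POPULATED))
		continue;
	spi = of_register_spi_device(master, nc);
	if (IS_ERR(spi)) {
		dev_warn(&master->dev, "Failed to create SPI device for %s\n",
			 nc->full_name);
		/* unmark on failure so a reload can try this node again */
		of_node_clear_flag(nc, OF_POPULATED);
	}
}
```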
-
Arnd Bergmann authored
commit 5c0ba577 upstream. When we get a spurious interrupt in fsl_espi_irq, we end up processing four uninitialized bytes of data, as shown in this warning message: drivers/spi/spi-fsl-espi.c: In function 'fsl_espi_irq': drivers/spi/spi-fsl-espi.c:462:4: warning: 'rx_data' may be used uninitialized in this function [-Wmaybe-uninitialized] This adds another check so we skip the data in this case. Fixes: 6319a680 ("spi/fsl-espi: avoid infinite loops on fsl_espi_cpu_irq()") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
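A minimal sketch of the guard (the register-access helper and field names are assumed from the driver family): a spurious interrupt with no event bits set is rejected before any receive data is touched.

```c
events = mpc8xxx_spi_read_reg(&reg_base->event);
if (!events)
	return IRQ_NONE;	/* spurious: nothing pending, rx word is never consumed */
```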
-
Ville Syrjälä authored
commit 36e3fa6a upstream. The i2c adapter is only relevant for some peer device types, so let's clear the pdt if it's still the same as the old_pdt when we tear down the i2c adapter. I don't really like this design pattern of updating port->whatever before doing the accompanying changes and passing around old_whatever to figure stuff out. It would make much more sense to me to pass the new value around and only update the port->whatever when things are consistent. But let's try to work with what we have right now. Quoting a follow-up from Ville: "And naturally I forgot to amend the commit message w.r.t. this guy [the change in drm_dp_destroy_port]. We don't really need to do this here, but I figured I'd try to be a bit more consistent by having it, just to avoid accidental mistakes if/when someone changes this stuff again later." v2: Clear port->pdt in the caller, if needed (Daniel) Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Carlos Santa <carlos.santa@intel.com> Cc: Kirill A. Shutemov <kirill@shutemov.name> Tested-by: Carlos Santa <carlos.santa@intel.com> Tested-by: Kirill A. Shutemov <kirill@shutemov.name> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97666 Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1477488633-16544-1-git-send-email-ville.syrjala@linux.intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vladimir Zapolskiy authored
commit 147b36d5 upstream. A race condition between registering an I2C device driver and deregistering an I2C adapter device which is assumed to manage that I2C device may lead to a NULL pointer dereference due to the uninitialized list head of driver clients. The root cause of the issue is that the I2C bus may know about the registered device driver and thus it is matched by bus_for_each_drv(), but the list of clients is not initialized and commonly it is NULL, because I2C device drivers define struct i2c_driver as static and the clients field is expected to be initialized by the I2C core: i2c_register_driver() i2c_del_adapter() driver_register() ... bus_add_driver() ... ... bus_for_each_drv(..., __process_removed_adapter) ... i2c_do_del_adapter() ... list_for_each_entry_safe(..., &driver->clients, ...) INIT_LIST_HEAD(&driver->clients); To solve the problem it is sufficient to initialize the clients list head before calling driver_register(). The problem was found while using an I2C device driver with a sluggish registration routine on a bus provided by a physically detachable I2C master controller, but practically the oops may be reproduced under the race between arbitrary I2C device driver registration and I2C bus device removal, e.g. by unbinding the latter over sysfs: % echo 21a4000.i2c > /sys/bus/platform/drivers/imx-i2c/unbind Unable to handle kernel NULL pointer dereference at virtual address 00000000 Internal error: Oops: 17 [#1] SMP ARM CPU: 2 PID: 533 Comm: sh Not tainted 4.9.0-rc3+ #61 Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree) task: e5ada400 task.stack: e4936000 PC is at i2c_do_del_adapter+0x20/0xcc LR is at __process_removed_adapter+0x14/0x1c Flags: NzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none Control: 10c5387d Table: 35bd004a DAC: 00000051 Process sh (pid: 533, stack limit = 0xe4936210) Stack: (0xe4937d28 to 0xe4938000) Backtrace: [<c0667be0>] (i2c_do_del_adapter) from [<c0667cc0>] (__process_removed_adapter+0x14/0x1c) [<c0667cac>] (__process_removed_adapter) from [<c0516998>] (bus_for_each_drv+0x6c/0xa0) [<c051692c>] (bus_for_each_drv) from [<c06685ec>] (i2c_del_adapter+0xbc/0x284) [<c0668530>] (i2c_del_adapter) from [<bf0110ec>] (i2c_imx_remove+0x44/0x164 [i2c_imx]) [<bf0110a8>] (i2c_imx_remove [i2c_imx]) from [<c051a838>] (platform_drv_remove+0x2c/0x44) [<c051a80c>] (platform_drv_remove) from [<c05183d8>] (__device_release_driver+0x90/0x12c) [<c0518348>] (__device_release_driver) from [<c051849c>] (device_release_driver+0x28/0x34) [<c0518474>] (device_release_driver) from [<c0517150>] (unbind_store+0x80/0x104) [<c05170d0>] (unbind_store) from [<c0516520>] (drv_attr_store+0x28/0x34) [<c05164f8>] (drv_attr_store) from [<c0298acc>] (sysfs_kf_write+0x50/0x54) [<c0298a7c>] (sysfs_kf_write) from [<c029801c>] (kernfs_fop_write+0x100/0x214) [<c0297f1c>] (kernfs_fop_write) from [<c0220130>] (__vfs_write+0x34/0x120) [<c02200fc>] (__vfs_write) from [<c0221088>] (vfs_write+0xa8/0x170) [<c0220fe0>] (vfs_write) from [<c0221e74>] (SyS_write+0x4c/0xa8) [<c0221e28>] (SyS_write) from [<c0108a20>] (ret_fast_syscall+0x0/0x1c) Signed-off-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
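A minimal sketch of the ordering the fix establishes in i2c_register_driver(): the clients list exists before driver_register() makes the driver matchable by bus_for_each_drv().

```c
INIT_LIST_HEAD(&driver->clients);	/* must happen before the driver becomes visible */

res = driver_register(&driver->driver);
if (res)
	return res;
```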
-
Hoan Tran authored
commit 60361601 upstream. SMBus block command uses the first byte of buffer for the data length. The dma_buffer should be increased by 1 to avoid the overrun issue. Reported-by: Phil Endecott <phil_gjouf_endecott@chezphil.org> Signed-off-by: Hoan Tran <hotran@apm.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
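A minimal sketch of the size fix (the buffer name is assumed): SMBus block transfers put the data length in byte 0, so the DMA bounce buffer needs one slot more than the maximum block payload.

```c
u8 dma_buffer[I2C_SMBUS_BLOCK_MAX + 1];	/* +1 for the leading length byte */
```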
-
David Wu authored
commit 399c168a upstream. We found a bug where i2c transfers sometimes failed on a 3066a board with stable-4.8: the con register would be updated with an uninitialized tuning value, which made the transfer fail. So set the tuning value to zero during rk3x_i2c_v0_calc_timings. Signed-off-by: David Wu <david.wu@rock-chips.com> Tested-by: Andy Yan <andy.yan@rock-chips.com> Reviewed-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 31 Oct, 2016 2 commits
-
-
Greg Kroah-Hartman authored
-
Vishal Verma authored
commit e046114a upstream. nvdimm_clear_poison cleared the user-visible badblocks, and sent commands to the NVDIMM to clear the areas marked as 'poison', but it neglected to clear the same areas from the internal poison_list which is used to marshal ARS results before sorting them by namespace. As a result, once on-demand ARS functionality was added: 37b137ff nfit, libnvdimm: allow an ARS scrub to be triggered on demand A scrub triggered from either sysfs or an MCE was found to be adding stale entries that had been cleared from gendisk->badblocks, but were still present in nvdimm_bus->poison_list. Additionally, the stale entries could be triggered into producing stale disk->badblocks by simply disabling and re-enabling the namespace or region. This adds the missing step of clearing poison_list entries when clearing poison, so that it is always in sync with badblocks. Fixes: 37b137ff ("nfit, libnvdimm: allow an ARS scrub to be triggered on demand") Signed-off-by: Vishal Verma <vishal.l.verma@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
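A heavily hedged sketch of the missing step; the poison-list helper named here is an assumption standing in for whatever the patch actually adds, and the byte-to-sector conversion is illustrative.

```c
/* keep the user-visible badblocks and the bus-internal poison_list in sync */
badblocks_clear(bb, sector, cleared >> 9);
nvdimm_forget_poison(nvdimm_bus, phys, cleared);	/* assumed helper name */
```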
-