- 05 Sep, 2014 40 commits
-
Alex Deucher authored
commit 6dc14baf upstream. bug: https://bugs.freedesktop.org/show_bug.cgi?id=82912 Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Theodore Ts'o authored
commit c99d1e6e upstream. If we suffer a block allocation failure (for example due to a memory allocation failure), it's possible that we will call ext4_discard_allocated_blocks() before we've actually allocated any blocks. In that case, fe_len and fe_start in ac->ac_f_ex will still be zero, and this will result in mb_free_blocks(inode, e4b, 0, 0) triggering the BUG_ON in mb_free_blocks(): BUG_ON(last >= (sb->s_blocksize << 3)); Fix this by bailing out of ext4_discard_allocated_blocks() if fe_len is zero. Also fix a missing ext4_mb_unload_buddy() call in ext4_discard_allocated_blocks(). Google-Bug-Id: 16844242 Fixes: 86f0afd4 Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
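As an illustration of the shape of the fix described above, here is a hedged sketch only; the function and field names follow the commit text, but the real ext4 code differs in detail:

    /* Sketch: bail out before touching the buddy bitmap when nothing was
     * actually allocated, and remember to unload the buddy afterwards. */
    static void ext4_discard_allocated_blocks(struct ext4_allocation_context *ac)
    {
        struct ext4_buddy e4b;
        int err;

        if (ac->ac_pa == NULL) {
            if (ac->ac_f_ex.fe_len == 0)
                return;    /* nothing allocated yet: avoids mb_free_blocks(inode, e4b, 0, 0) */
            err = ext4_mb_load_buddy(ac->ac_sb, ac->ac_f_ex.fe_group, &e4b);
            if (err)
                return;
            ext4_lock_group(ac->ac_sb, ac->ac_f_ex.fe_group);
            mb_free_blocks(ac->ac_inode, &e4b,
                           ac->ac_f_ex.fe_start, ac->ac_f_ex.fe_len);
            ext4_unlock_group(ac->ac_sb, ac->ac_f_ex.fe_group);
            ext4_mb_unload_buddy(&e4b);    /* the previously missing unload */
            return;
        }
        /* preallocation case elided */
    }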
-
Michael S. Tsirkin authored
commit 350b8bdd upstream. The third parameter of kvm_iommu_put_pages is wrong; it should be 'gfn - slot->base_gfn'. By making gfn very large, a malicious guest or userspace can cause kvm to go to this error path, and subsequently to pass a huge value as size. Alternatively if gfn is small, then pages would be pinned but never unpinned, causing a host memory leak and local DoS. Passing a reasonable but large value could be the most dangerous case, because it would unpin a page that should have stayed pinned, and thus allow the device to DMA into arbitrary memory. However, this cannot happen because of the conditions that can trigger the error:
- out of memory (where you can't allocate even a single page) should not be possible for the attacker to trigger
- when exceeding the iommu's address space, guest pages after gfn will also exceed the iommu's address space, and inside kvm_iommu_put_pages() the iommu_iova_to_phys() will fail. The page thus would not be unpinned at all.
Reported-by: Jack Morgenstein <jackm@mellanox.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
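In code terms the change amounts to the following, a sketch of the error path rather than the verbatim diff:

    /* kvm_iommu_put_pages(kvm, base_gfn, npages) takes a page count as its
     * third argument. On the error path of kvm_iommu_map_pages(): */

    /* before: passes the absolute gfn, which can be huge or too small */
    kvm_iommu_put_pages(kvm, slot->base_gfn, gfn);

    /* after: passes the number of pages actually mapped so far */
    kvm_iommu_put_pages(kvm, slot->base_gfn, gfn - slot->base_gfn);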
-
Paolo Bonzini authored
commit 0d234daf upstream. This reverts commit 682367c4, which causes 32-bit SMP Windows 7 guests to panic. SeaBIOS has a limit on the number of MTRRs that it can handle, and this patch exceeded the limit. Better revert it. Thanks to Nadav Amit for debugging the cause. Reported-by: Wanpeng Li <wanpeng.li@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Wanpeng Li authored
commit 56cc2406 upstream. After commit 77b0f5d6 (KVM: nVMX: Ack and write vector info to intr_info if L1 asks us to), "Acknowledge interrupt on exit" behavior can be emulated. To do so, KVM will ask the APIC for the interrupt vector during a nested vmexit if VM_EXIT_ACK_INTR_ON_EXIT is set. With APICv, kvm_get_apic_interrupt would return -1 and give the following WARNING:

 Call Trace:
  [<ffffffff81493563>] dump_stack+0x49/0x5e
  [<ffffffff8103f0eb>] warn_slowpath_common+0x7c/0x96
  [<ffffffffa059709a>] ? nested_vmx_vmexit+0xa4/0x233 [kvm_intel]
  [<ffffffff8103f11a>] warn_slowpath_null+0x15/0x17
  [<ffffffffa059709a>] nested_vmx_vmexit+0xa4/0x233 [kvm_intel]
  [<ffffffffa0594295>] ? nested_vmx_exit_handled+0x6a/0x39e [kvm_intel]
  [<ffffffffa0537931>] ? kvm_apic_has_interrupt+0x80/0xd5 [kvm]
  [<ffffffffa05972ec>] vmx_check_nested_events+0xc3/0xd3 [kvm_intel]
  [<ffffffffa051ebe9>] inject_pending_event+0xd0/0x16e [kvm]
  [<ffffffffa051efa0>] vcpu_enter_guest+0x319/0x704 [kvm]

To fix this, we cannot rely on the processor's virtual interrupt delivery, because "acknowledge interrupt on exit" must only update the virtual ISR/PPR/IRR registers (and SVI, which is just a cache of the virtual ISR) but it should not deliver the interrupt through the IDT. Thus, KVM has to deliver the interrupt "by hand", similar to the treatment of EOI in commit fc57ac2c (KVM: lapic: sync highest ISR to hardware apic on EOI, 2014-05-14). The patch modifies kvm_cpu_get_interrupt to always acknowledge an interrupt; there are only two callers, and the other is not affected because it is never reached with kvm_apic_vid_enabled() == true. Then it modifies apic_set_isr and apic_clear_irr to update SVI and RVI in addition to the registers. Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Suggested-by: "Zhang, Yang Z" <yang.z.zhang@intel.com> Tested-by: Liu, RongrongX <rongrongx.liu@intel.com> Tested-by: Felipe Reyes <freyes@suse.com> Fixes: 77b0f5d6 Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexey Kardashevskiy authored
commit a0840240 upstream. Unfortunately, the LPCR got defined as a 32-bit register in the one_reg interface. This is unfortunate because KVM allows userspace to control the DPFD (default prefetch depth) field, which is in the upper 32 bits. The result is that DPFD always gets set to 0, which reduces performance in the guest. We can't just change KVM_REG_PPC_LPCR to be a 64-bit register ID, since that would break existing userspace binaries. Instead we define a new KVM_REG_PPC_LPCR_64 id which is 64-bit. Userspace can still use the old KVM_REG_PPC_LPCR id, but it now only modifies those fields in the bottom 32 bits that userspace can modify (ILE, TC and AIL). If userspace uses the new KVM_REG_PPC_LPCR_64 id, it can modify DPFD as well. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christian Borntraeger authored
commit 55e4283c upstream. commit ec66ad66 (s390/mm: enable split page table lock for PMD level) activated the split pmd lock for s390. Turns out that we missed one place: We also have to take the pmd lock instead of the page table lock when we reallocate the page tables (==> changing entries in the PMD) during sie enablement. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paolo Bonzini authored
commit 0f6c0a74 upstream. Currently, the EOI exit bitmap (used for APICv) does not include interrupts that are masked. However, this can cause a bug that manifests as an interrupt storm inside the guest. Alex Williamson reported the bug and is the one who really debugged this; I only wrote the patch. :) The scenario involves a multi-function PCI device with OHCI and EHCI USB functions and an audio function, all assigned to the guest, where both USB functions use legacy INTx interrupts. As soon as the guest boots, interrupts for these devices turn into an interrupt storm in the guest; the host does not see the interrupt storm. Basically the EOI path does not work, and the guest continues to see the interrupt over and over, even after it attempts to mask it at the APIC. The bug is only visible with older kernels (RHEL6.5, based on 2.6.32 with not many changes in the area of APIC/IOAPIC handling). Alex then tried forcing bit 59 (corresponding to the USB functions' IRQ) on in the eoi_exit_bitmap and TMR, and things then work. What happens is that VFIO asserts IRQ11, then KVM recomputes the EOI exit bitmap. It does not have bit 59 set because the RTE was masked, so the IOAPIC never sees the EOI and the interrupt continues to fire in the guest. My guess was that the guest is masking the interrupt in the redirection table in the interrupt routine, i.e. while the interrupt is set in a LAPIC's ISR. The simplest fix is to ignore the masking state: we would rather have an unnecessary exit than a missed IRQ ACK, and anyway IOAPIC interrupts are not as performance-sensitive as, for example, MSIs. Alex tested this patch and it fixed his bug. [Thanks to Alex for his precise description of the problem and initial debugging effort. A lot of the text above is based on emails exchanged with him.] Reported-by: Alex Williamson <alex.williamson@redhat.com> Tested-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Nadav Amit authored
commit 9e8919ae upstream. Return an unhandlable error on an inter-privilege-level ret instruction. This is because the current emulation does not check the privilege level correctly when loading CS, and does not pop RSP/SS as needed. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steven Rostedt authored
commit 485d4402 upstream. [ I'm currently running my tests on it now, and so far, after a few hours it has yet to blow up. I'll run it for 24 hours, which it never succeeded at in the past. ]

The tracing code has a way to make directories within the debugfs file system as well as delete them using mkdir/rmdir in the instance directory. This is very limited in functionality; for example, there are no renames, and the parent directory "instance" can not be modified. The tracing code creates the instance directory from the debugfs code and then replaces the dentry->d_inode->i_op with its own to allow mkdir/rmdir to work. When these are called, the d_entry and inode locks need to be released to call the instance creation and deletion code. That code has its own accounting and locking to serialize everything to prevent multiple users from causing harm. As the parent "instance" directory can not be modified, this simplifies things.

I created a stress test that creates several threads that randomly create and delete directories thousands of times a second. The code stood up to this test and I submitted it a while ago. Recently I added a new test that adds readers to the mix. While the instance directories were being added and deleted, readers would read from these directories and even enable tracing within them. This test was able to trigger a bug:

 general protection fault: 0000 [#1] PREEMPT SMP
 Modules linked in: ...
 CPU: 3 PID: 17789 Comm: rmdir Tainted: G W 3.15.0-rc2-test+ #41
 Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
 task: ffff88003786ca60 ti: ffff880077018000 task.ti: ffff880077018000
 RIP: 0010:[<ffffffff811ed5eb>] [<ffffffff811ed5eb>] debugfs_remove_recursive+0x1bd/0x367
 RSP: 0018:ffff880077019df8 EFLAGS: 00010246
 RAX: 0000000000000002 RBX: ffff88006f0fe490 RCX: 0000000000000000
 RDX: dead000000100058 RSI: 0000000000000246 RDI: ffff88003786d454
 RBP: ffff88006f0fe640 R08: 0000000000000628 R09: 0000000000000000
 R10: 0000000000000628 R11: ffff8800795110a0 R12: ffff88006f0fe640
 R13: ffff88006f0fe640 R14: ffffffff81817d0b R15: ffffffff818188b7
 FS: 00007ff13ae24700(0000) GS:ffff88007d580000(0000) knlGS:0000000000000000
 CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
 CR2: 0000003054ec7be0 CR3: 0000000076d51000 CR4: 00000000000007e0
 Stack:
  ffff88007a41ebe0 dead000000100058 00000000fffffffe ffff88006f0fe640
  0000000000000000 ffff88006f0fe678 ffff88007a41ebe0 ffff88003793a000
  00000000fffffffe ffffffff810bde82 ffff88006f0fe640 ffff88007a41eb28
 Call Trace:
  [<ffffffff810bde82>] ? instance_rmdir+0x15b/0x1de
  [<ffffffff81132e2d>] ? vfs_rmdir+0x80/0xd3
  [<ffffffff81132f51>] ? do_rmdir+0xd1/0x139
  [<ffffffff8124ad9e>] ? trace_hardirqs_on_thunk+0x3a/0x3c
  [<ffffffff814fea62>] ? system_call_fastpath+0x16/0x1b
 Code: fe ff ff 48 8d 75 30 48 89 df e8 c9 fd ff ff 85 c0 75 13 48 c7 c6 b8 cc d2 81 48 c7 c7 b0 cc d2 81 e8 8c 7a f5 ff 48 8b 54 24 08 <48> 8b 82 a8 00 00 00 48 89 d3 48 2d a8 00 00 00 48 89 44 24 08
 RIP [<ffffffff811ed5eb>] debugfs_remove_recursive+0x1bd/0x367
 RSP <ffff880077019df8>

It took a while, but every time it triggered, it was always in the same place:

    list_for_each_entry_safe(child, next, &parent->d_subdirs, d_u.d_child) {

Where the child->d_u.d_child seemed to be corrupted. I added lots of trace_printk()s to see what was wrong, and sure enough, it was always the child's d_u.d_child field.

I looked around to see what touches it and noticed that __dentry_kill() calls dentry_free():

    static void dentry_free(struct dentry *dentry)
    {
        /* if dentry was never visible to RCU, immediate free is OK */
        if (!(dentry->d_flags & DCACHE_RCUACCESS))
            __d_free(&dentry->d_u.d_rcu);
        else
            call_rcu(&dentry->d_u.d_rcu, __d_free);
    }

I also noticed that __dentry_kill() unlinks the child->d_u.d_child under the parent->d_lock spin_lock. Looking back at the loop in debugfs_remove_recursive(), it never takes the parent->d_lock to do the list walk. Adding more tracing, I was able to prove this was the issue:

 ftrace-t-15385 1.... 246662024us : dentry_kill <ffffffff81138b91>: free ffff88006d573600
 rmdir-15409 2.... 246662024us : debugfs_remove_recursive <ffffffff811ec7e5>: child=ffff88006d573600 next=dead000000100058

The dentry_kill freed ffff88006d573600 just as the remove recursive was walking it. In order to fix this, the list walk needs to be modified a bit to take the parent->d_lock. The safe version is no longer necessary, as every time we remove a child, the parent->d_lock must be released and the list walk must start over. Each time a child is removed, even though it may still be on the list, it should be skipped by the first check in the loop:

    if (!debugfs_positive(child))
        continue;

Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
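The resulting walk looks roughly like this; a simplified sketch only, where remove_one() stands in for the real per-child removal work:

    loop:
        spin_lock(&parent->d_lock);
        list_for_each_entry(child, &parent->d_subdirs, d_u.d_child) {
            if (!debugfs_positive(child))
                continue;            /* already killed, skip it */
            spin_unlock(&parent->d_lock);
            remove_one(child);       /* hypothetical helper for the per-child work */
            goto loop;               /* the list may have changed; start over */
        }
        spin_unlock(&parent->d_lock);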
-
Arnd Bergmann authored
commit e1f8859e upstream. The interrupt handler in the ux500 crypto driver has an obviously incorrect way to access the data buffer, which for a while has caused this build warning:

 ../ux500/cryp/cryp_core.c: In function 'cryp_interrupt_handler':
 ../ux500/cryp/cryp_core.c:234:5: warning: passing argument 1 of '__fswab32' makes integer from pointer without a cast [enabled by default]
   writel_relaxed(ctx->indata,
   ^
 In file included from ../include/linux/swab.h:4:0,
  from ../include/uapi/linux/byteorder/big_endian.h:12,
  from ../include/linux/byteorder/big_endian.h:4,
  from ../arch/arm/include/uapi/asm/byteorder.h:19,
  from ../include/asm-generic/bitops/le.h:5,
  from ../arch/arm/include/asm/bitops.h:340,
  from ../include/linux/bitops.h:33,
  from ../include/linux/kernel.h:10,
  from ../include/linux/clk.h:16,
  from ../drivers/crypto/ux500/cryp/cryp_core.c:12:
 ../include/uapi/linux/swab.h:57:119: note: expected '__u32' but argument is of type 'const u8 *'
  static inline __attribute_const__ __u32 __fswab32(__u32 val)

There are at least two, possibly three, problems here:
 a) when writing into the FIFO, we copy the pointer rather than the actual data we want to give to the hardware
 b) the data pointer is an array of 8-bit values, while the FIFO is 32-bit wide, so both the read and write access fail to do a proper type conversion
 c) this seems incorrect for big-endian kernels, on which we need to byte-swap any register access, but not normally FIFO accesses; at least the DMA case doesn't do it either.

This converts the bogus loop to use the same readsl/writesl pair that we use for the two other modes (DMA and polling). This is more efficient and consistent, and probably correct for endianness. The bug has existed since the driver was first merged, and was probably never detected because nobody tried to use interrupt mode. It might make sense to backport this fix to stable kernels, depending on how the crypto maintainers feel about that. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: linux-crypto@vger.kernel.org Cc: Fabio Baltieri <fabio.baltieri@linaro.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
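For illustration, the FIFO access the fix switches to looks roughly like this; a hedged sketch where the register and context field names are assumptions, not the driver's exact identifiers:

    /* Copy whole 32-bit words between the buffer and the FIFO instead of
     * writing the buffer *pointer* with writel_relaxed(). */
    u32 nwords = nbytes / sizeof(u32);    /* assumed transfer length in words */

    writesl(&device_data->base->din, ctx->indata, nwords);   /* CPU -> FIFO */
    readsl(&device_data->base->dout, ctx->outdata, nwords);  /* FIFO -> CPU */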
-
Peter Hurley authored
commit ae84db96 upstream. When a tty is opened for the serial console, the termios c_cflag settings are inherited from the console line settings. However, if the tty is subsequently closed, the termios settings are lost. This results in a garbled console if the console is later suspended and resumed. Preserve the termios c_cflag for the serial console when the tty is shutdown; this reflects the most recent line settings. Fixes: Bugzilla #69751, 'serial console does not wake from S3' Reported-by: Valerio Vanni <valerio.vanni@inwind.it> Acked-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Theodore Ts'o authored
commit 86f0afd4 upstream. If there is a failure while allocating the preallocation structure, a number of blocks can end up getting marked in the in-memory buddy bitmap, and then not getting released. This can result in the following corruption getting reported by the kernel: EXT4-fs error (device sda3): ext4_mb_generate_buddy:758: group 1126, 12793 clusters in bitmap, 12729 in gd In that case, we need to release the blocks using mb_free_blocks(). Tested: fs smoke test; also demonstrated that with injected errors, the file system is no longer getting corrupted Google-Bug-Id: 16657874 Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lukas Czerner authored
commit 4f579ae7 upstream. Currently the punch hole code on files with direct/indirect mapping has some problems which may lead to data loss. For example (from Jan Kara): fallocate -n -p 10240000 4096 will punch the range 10240000 - 12632064 instead of the range 10240000 - 10244096. Also the code is a bit weird: it is not using the infrastructure provided by indirect.c, but rather creating its own way. This patch fixes the issues as well as making the operation run 4 times faster in my testing (punching out a 60GB file). It uses an approach similar to the one used in ext4_ind_truncate(), which takes advantage of the ext4_free_branches() function. Also rename ext4_free_hole_blocks() to something more sensible, like the equivalent we have for extent-mapped files. Call it ext4_ind_remove_space(). This has been tested mostly with fsx and some xfstests which test punch hole but do not require unwritten extents, which are not supported with direct/indirect mapping. No problems showed up even with 1024k block size. Signed-off-by: Lukas Czerner <lczerner@redhat.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
addy ke authored
commit 9c5f7cad upstream. If the slave holds SCL, I2C_IPD[7] will be set to 1 by the controller for debugging. The driver must ignore it.

 [ 5.752391] rk3x-i2c ff160000.i2c: unexpected irq in WRITE: 0x80
 [ 5.939027] rk3x-i2c ff160000.i2c: timeout, ipd: 0x80, state: 4

Signed-off-by: Addy Ke <addy.ke@rock-chips.com> Reviewed-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
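A minimal sketch of the workaround; REG_IPD and the handler context are placeholders, and only the masking of bit 7 is the point:

    u32 ipd = readl(i2c->regs + REG_IPD);    /* REG_IPD: assumed register offset */

    /* IPD[7] is set by the controller for debugging whenever the slave
     * stretches SCL; mask it out so it is never treated as an unexpected
     * interrupt source. */
    ipd &= ~BIT(7);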
-
Wolfram Sang authored
commit 28772ac8 upstream. dma_{un}map_* uses 'enum dma_data_direction' not 'enum dma_transfer_direction'. Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
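The two enums are easy to confuse because of their similar names; a short sketch of which API expects which (the device, channel, and buffer variables are placeholders):

    /* The streaming DMA-mapping API takes enum dma_data_direction ... */
    dma_addr_t phys = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

    /* ... while the dmaengine slave API takes enum dma_transfer_direction. */
    struct dma_async_tx_descriptor *desc =
        dmaengine_prep_slave_single(chan, phys, len, DMA_MEM_TO_DEV,
                                    DMA_PREP_INTERRUPT | DMA_CTRL_ACK);

    /* The unmap must again use dma_data_direction, matching the map call. */
    dma_unmap_single(dev, phys, len, DMA_TO_DEVICE);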
-
Jason Gunthorpe authored
commit f07a5e9a upstream. Most device drivers do call 'tpm_do_selftest', which executes a TPM_ContinueSelfTest. tpm_i2c_stm_st33 is just pointlessly different; I think it is a bug. These days we have the general assumption that the TPM is usable by the kernel immediately after the driver is finished, so we can no longer defer the mandatory self test to userspace. Reported-by: Richard Marciel <rmaciel@linux.vnet.ibm.com> Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Peter Huewe <peterhuewe@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
commit 5b963089 upstream. On platforms with sizeof(int) < sizeof(long), writing a temperature limit larger than MAXINT will result in unpredictable limit values written to the chip. Avoid auto-conversion from long to int to fix the problem. The hysteresis temperature range depends on the value of data->temp[attr->index], since val is subtracted from it. Use a wider clamp; [-120000, 220000] should do to cover the possible range. Also add the missing TEMP_TO_REG() on writes into the cached hysteresis value. Also use clamp_val to simplify the code a bit. Signed-off-by: Axel Lin <axel.lin@ingics.com> [Guenter Roeck: Fixed double TEMP_TO_REG on hysteresis updates] Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
commit d58e47d7 upstream. On platforms with sizeof(int) < sizeof(long), writing a temperature limit larger than MAXINT will result in unpredictable limit values written to the chip. Avoid auto-conversion from long to int to fix the problem. Voltage limits, fan minimum speed, pwm frequency, pwm ramp rate, and other attributes have the same problem, fix them as well. Zone temperature limits are signed, but were cached as u8, causing unexpected values to be reported for negative temperatures. Cache as s8 to fix the problem. vrm is an u8, so the written value needs to be limited to [0, 255]. Signed-off-by: Axel Lin <axel.lin@ingics.com> [Guenter Roeck: Fix zone temperature cache] Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
commit e9814295 upstream. Current code uses data_rate as array index in ads1015_read_adc() and uses pga as array index in ads1015_reg_to_mv, so we must make sure both data_rate and pga settings are in valid value range. Return -EINVAL if the setting is out-of-range. Signed-off-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
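A sketch of the kind of validation described; the bound values and message text are assumptions for illustration:

    /* Reject out-of-range settings before they are ever used as array
     * indices in ads1015_read_adc() / ads1015_reg_to_mv(). */
    if (data_rate > 7) {        /* assumed highest valid data-rate index */
        dev_err(&client->dev, "invalid data_rate setting %u\n", data_rate);
        return -EINVAL;
    }
    if (pga > 6) {              /* assumed highest valid PGA index */
        dev_err(&client->dev, "invalid gain setting %u\n", pga);
        return -EINVAL;
    }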
-
Guenter Roeck authored
commit 3248c3b7 upstream. Temperature limit register writes did not account for negative numbers. As a result, writing -127000 resulted in -126000 written into the temperature limit register. This problem affected temp[1-3]_min, temp[1-3]_max, temp[1-3]_auto_temp_crit, and temp[1-3]_auto_temp_min. When writing pwm[1-3]_freq, a long variable was auto-converted into an int without range check. Writing values larger than MAXINT resulted in unexpected register values. When writing temp[1-3]_auto_temp_max, an unsigned long variable was auto-converted into an int without range check. Writing values larger than MAXINT resulted in unexpected register values. vrm is an u8, so the written value needs to be limited to [0, 255]. Cc: Axel Lin <axel.lin@ingics.com> Reviewed-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
commit 56de1377 upstream. Current code uses channel as array index, so the valid channel value is 0 .. ADS1015_CHANNELS - 1. Signed-off-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Axel Lin authored
commit 2565fb05 upstream. On platforms with sizeof(int) < sizeof(unsigned long), writing a rpm value larger than MAXINT will result in unpredictable limit values written to the chip. Avoid auto-conversion from unsigned long to int to fix the problem. Signed-off-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Guenter Roeck authored
commit 1074d683 upstream. On platforms with sizeof(int) < sizeof(long), writing a temperature limit larger than MAXINT will result in unpredictable limit values written to the chip. Avoid auto-conversion from long to int to fix the problem. Cc: Axel Lin <axel.lin@ingics.com> Reviewed-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
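The general shape of these hwmon write-path fixes, as a hedged sketch; the register macro, clamp limits, and driver-data layout are illustrative and not taken from any one of the drivers above:

    static ssize_t set_temp_max(struct device *dev, struct device_attribute *attr,
                                const char *buf, size_t count)
    {
        struct my_data *data = dev_get_drvdata(dev);    /* hypothetical driver data */
        long val;
        int err;

        err = kstrtol(buf, 10, &val);
        if (err)
            return err;

        /* Clamp while still in 'long', before any conversion, so a huge
         * user-supplied value cannot wrap through an implicit long -> int
         * truncation into a bogus register value. */
        val = clamp_val(val, -55000, 125000);

        mutex_lock(&data->update_lock);
        data->temp_max = TEMP_TO_REG(val);    /* illustrative conversion macro */
        i2c_smbus_write_byte_data(data->client, REG_TEMP_MAX, data->temp_max);
        mutex_unlock(&data->update_lock);
        return count;
    }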
-
Axel Lin authored
commit cf44819c upstream. Ensure the mutex lock protects the whole read-modify-write sequence to prevent a possible race condition. In addition, updates to data->valid should also be protected by the mutex lock. Signed-off-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
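In outline, the change looks like this; register and mask names are placeholders:

    /* The whole read-modify-write, plus the cache invalidation, now sits
     * under the driver mutex so concurrent writers cannot interleave. */
    mutex_lock(&data->update_lock);
    reg = i2c_smbus_read_byte_data(client, REG_CONFIG);
    reg = (reg & ~CONFIG_FIELD_MASK) | new_bits;
    i2c_smbus_write_byte_data(client, REG_CONFIG, reg);
    data->valid = 0;    /* force the next read to refresh cached values */
    mutex_unlock(&data->update_lock);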
-
Axel Lin authored
commit cc336546 upstream. On platforms with sizeof(int) < sizeof(long), writing a temperature limit larger than MAXINT will result in unpredictable limit values written to the chip. Avoid auto-conversion from long to int to fix the problem. Signed-off-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ulf Hansson authored
commit ad82bfea upstream. This patch won't change the behavior of how mmci deals with CMD irqs. By moving code from mmci_irq() to mmci_cmd_irq(), we get a better overview of what is going on. Cc: Peter Maydell <peter.maydell@linaro.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Russell King <linux@arm.linux.org.uk> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ulf Hansson authored
commit 1cb9da50 upstream. We don't need to verify the content of the status register twice, while we are about to handle a DATA irq. Instead let's leave all verification to be handled by mmci_data_irq(). Cc: Peter Maydell <peter.maydell@linaro.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Russell King <linux@arm.linux.org.uk> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Russell King authored
commit 2d31ca3a upstream. Regular randconfig nightly testing has detected problems with omapdrm. omapdrm fails to build when the kernel is built to support 64-bit DMA addresses and/or 64-bit physical addresses due to an assumption about the width of these types. Use %pad to print DMA addresses, rather than %x or %Zx (which is even more wrong than %x). Avoid passing a uint32_t pointer into a function which expects dma_addr_t pointer.

 drivers/gpu/drm/omapdrm/omap_plane.c: In function 'omap_plane_pre_apply':
 drivers/gpu/drm/omapdrm/omap_plane.c:145:2: error: format '%x' expects argument of type 'unsigned int', but argument 5 has type 'dma_addr_t' [-Werror=format]
 drivers/gpu/drm/omapdrm/omap_plane.c:145:2: error: format '%x' expects argument of type 'unsigned int', but argument 6 has type 'dma_addr_t' [-Werror=format]
 make[5]: *** [drivers/gpu/drm/omapdrm/omap_plane.o] Error 1
 drivers/gpu/drm/omapdrm/omap_gem.c: In function 'omap_gem_get_paddr':
 drivers/gpu/drm/omapdrm/omap_gem.c:794:4: error: format '%x' expects argument of type 'unsigned int', but argument 3 has type 'dma_addr_t' [-Werror=format]
 drivers/gpu/drm/omapdrm/omap_gem.c: In function 'omap_gem_describe':
 drivers/gpu/drm/omapdrm/omap_gem.c:991:4: error: format '%Zx' expects argument of type 'size_t', but argument 7 has type 'dma_addr_t' [-Werror=format]
 drivers/gpu/drm/omapdrm/omap_gem.c: In function 'omap_gem_init':
 drivers/gpu/drm/omapdrm/omap_gem.c:1470:4: error: format '%x' expects argument of type 'unsigned int', but argument 7 has type 'dma_addr_t' [-Werror=format]
 make[5]: *** [drivers/gpu/drm/omapdrm/omap_gem.o] Error 1
 drivers/gpu/drm/omapdrm/omap_dmm_tiler.c: In function 'dmm_txn_append':
 drivers/gpu/drm/omapdrm/omap_dmm_tiler.c:226:2: error: passing argument 3 of 'alloc_dma' from incompatible pointer type [-Werror]
 make[5]: *** [drivers/gpu/drm/omapdrm/omap_dmm_tiler.o] Error 1
 make[5]: Target `__build' not remade because of errors.
 make[4]: *** [drivers/gpu/drm/omapdrm] Error 2

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
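The printk side of the fix is purely a matter of format specifier; a small sketch (the surrounding variables are placeholders):

    dma_addr_t paddr = omap_obj->paddr;    /* placeholder source of the address */

    /* Broken with 64-bit dma_addr_t: truncates and triggers -Wformat.
     *   dev_info(dev, "paddr: %08x\n", (u32)paddr);
     * Portable: %pad takes a *pointer* to dma_addr_t and prints its full width. */
    dev_info(dev, "paddr: %pad\n", &paddr);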
-
Jeremy Vial authored
commit 9b5f7428 upstream. According to the comment "restore_es3: applies to 34xx >= ES3.0" in "arch/arm/mach-omap2/sleep34xx.S", omap3_restore_es3 should be used if the revision of an OMAP34xx is ES3.1.2. Signed-off-by: Jeremy Vial <jvial@adeneo-embedded.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Baruch Siach authored
commit bc994c77 upstream. Commit cb8db5d4 (UAPI: (Scripted) Disintegrate arch/arm/include/asm) moved these syscall comments out of their context into the UAPI headers. Fix this. Fixes: cb8db5d4 ("UAPI: (Scripted) Disintegrate arch/arm/include/asm") Signed-off-by: Baruch Siach <baruch@tkos.co.il> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Suman Anna authored
commit 44e6ab1b upstream. The mailbox DT node for AM4372 is enabled and is corrected to remove some properties that have crept in by mistake. Fixes: 9e3269b8 (ARM: dts: AM4372: Add L2, EDMA, mailbox, MMC and SHAM nodes) Signed-off-by: Suman Anna <s-anna@ti.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Semen Protsenko authored
commit 6a7519e8 upstream. "efi" global data structure contains "runtime_version" field which must be assigned in order to use it later in Runtime Services virtual calls (virt_efi_* functions). Before this patch "runtime_version" was unassigned (0), so each Runtime Service virtual call that checks revision would fail. Signed-off-by: Semen Protsenko <semen.protsenko@linaro.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Will Deacon authored
commit c878e0cf upstream. Our break hooks are used to handle brk exceptions from kgdb (and potentially kprobes if that code ever resurfaces), so don't bother calling them if the BRK exception comes from userspace. This prevents userspace from trapping to a kdb shell on systems where kgdb is enabled and active. Reported-by: Omar Sandoval <osandov@osandov.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Catalin Marinas authored
commit 7f0b1bf0 upstream. The architecture specification states that both DSB and ISB are required between page table modifications and subsequent memory accesses using the corresponding virtual address. When TLB invalidation takes place, the tlb_flush_* functions already have the necessary barriers. However, there are other functions like create_mapping() for which this is not the case. The patch adds the DSB+ISB instructions in the set_pte() function for valid kernel mappings. The invalid pte case is handled by tlb_flush_* and the user mappings in general have a corresponding update_mmu_cache() call containing a DSB. Even when update_mmu_cache() isn't called, the kernel can still cope with an unlikely spurious page fault by re-executing the instruction. In addition, the set_pmd, set_pud() functions gain an ISB for architecture compliance when block mappings are created. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Leif Lindholm <leif.lindholm@linaro.org> Acked-by: Steve Capper <steve.capper@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Bristot de Oliveira authored
commit d8d28c8f upstream. The scheduler uses policy == -1 to preserve the current policy state to implement sched_setparam(). But, as (int) -1 is equal to 0xffffffff, it matches the if (policy & SCHED_RESET_ON_FORK) check in _sched_setscheduler(). This match changes the policy value to an invalid value, breaking the sched_setparam() syscall. This patch checks policy == -1 before checking the SCHED_RESET_ON_FORK flag. The following program shows the bug:

    int main(void)
    {
        struct sched_param param = {
            .sched_priority = 5,
        };

        sched_setscheduler(0, SCHED_FIFO, &param);
        param.sched_priority = 1;
        sched_setparam(0, &param);
        param.sched_priority = 0;
        sched_getparam(0, &param);
        if (param.sched_priority != 1)
            printf("failed priority setting (found %d instead of 1)\n",
                   param.sched_priority);
        else
            printf("priority setting fine\n");
    }

Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: linux-kernel@vger.kernel.org Fixes: 7479f3c9 "sched: Move SCHED_RESET_ON_FORK into attr::sched_flags" Link: http://lkml.kernel.org/r/9ebe0566a08dbbb3999759d3f20d6004bb2dbcfa.1406079891.git.bristot@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hans de Goede authored
commit 8f873c1f upstream. Streams on the EJ168 do not work as they should. I've spent 2 days trying to get them to work, but without success. The first problem is that whenever you ring the stream-ring doorbell, the controller starts executing trbs at the beginning of the first ring segment, even if it ended somewhere else previously. This can be worked around by allowing enqueueing of only one td (not a problem with how streams are typically used) and then resetting our copies of the enqueueing and dequeueing pointers on a td completion to match what the controller seems to be doing. This way things seem to start working with uas, and instead of being able to complete only the very first scsi command, the scsi core can probe the disk. But then things break later on when td-s get enqueued with more than one trb. The controller does seem to increase its dequeue pointer while executing a stream-ring (data transfer events I inserted for debugging do trigger). However execution seems to stop at the final normal trb of a multi-trb td, even if there is a data transfer event inserted after the final trb. The first problem alone is a serious deviation from the spec, and esp. dealing with cancellation would have been very tricky if not outright impossible, but the second problem simply is a deal breaker altogether, so this patch simply disables streams. Note this will cause the usb-storage + uas driver pair to automatically switch to using usb-storage instead of uas on these devices, essentially reverting to the 3.14 and earlier behavior when uas was marked CONFIG_BROKEN. https://bugzilla.redhat.com/show_bug.cgi?id=1121288 https://bugzilla.kernel.org/show_bug.cgi?id=80101 Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
commit fe2f17eb upstream. wait_event_timeout can return 0 or the remaining jiffies, so return -ETIME if the disconnected state was not reached. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
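A sketch of the return-value handling being added; the wait queue, state name, and timeout are illustrative rather than the driver's exact identifiers:

    long ret;

    /* wait_event_timeout() returns 0 on timeout and the remaining jiffies
     * otherwise, so a zero return means the disconnect never completed. */
    ret = wait_event_timeout(cl->wait,
                             cl->state == MEI_FILE_DISCONNECTED,
                             msecs_to_jiffies(timeout_ms));
    if (ret == 0)
        return -ETIME;
    return 0;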
-
Alexander Usyskin authored
commit d5d83f8a upstream. Calling pm_schedule_suspend from the runtime PM idle callback may reschedule an existing timer, so in the case of frequent runtime PM idle calls the suspend may be starved. Instead we call pm_runtime_autosuspend, which checks whether the timer is already charged. An example is monitoring the device's PCI config space: the PCI config sysfs handlers call the pci_config_pm_runtime_put/get helpers, which in turn call the device idle callback. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
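A sketch of the idle-callback pattern described; the callback name and return convention are assumptions based on common runtime-PM usage:

    static int mei_runtime_idle(struct device *device)    /* name assumed */
    {
        /* pm_schedule_suspend() would re-arm the timer on every idle
         * callback; pm_runtime_autosuspend() respects a timer that is
         * already charged, so frequent idle calls no longer starve the
         * suspend. */
        pm_runtime_autosuspend(device);
        return -EBUSY;    /* tell the PM core we scheduled the suspend ourselves */
    }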
-
Alexander Usyskin authored
commit 22b987a3 upstream. The link must be reset in case the fw doesn't respond to a client disconnect request. We charged the timer only in the irq path, from mei_cl_irq_close, and not in mei_cl_disconnect. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-