- 18 Nov, 2016 29 commits
-
-
Felipe Balbi authored
commit fd9afd3c upstream. According to Dave Miller "the networking stack has a hard requirement that all SKBs which are transmitted must have their completion signalled in a finite amount of time. This is because, until the SKB is freed by the driver, it holds onto socket, netfilter, and other subsystem resources." In summary, this means that using TX IRQ throttling for the networking gadgets is, at least, complex and we should avoid it for the time being. Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Suggested-by: David Miller <davem@davemloft.net> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Hovold authored
commit 18266403 upstream. The TIOCMIWAIT implementation would return -EINVAL if any of the three supported signals were included in the mask. Instead of returning an error in case TIOCM_CTS is included, simply drop the mask check completely, which is in accordance with how other drivers implement this ioctl. Fixes: 5a6a62bd ("cdc-acm: add TIOCMIWAIT") Signed-off-by: Johan Hovold <johan@kernel.org> Acked-by: Oliver Neukum <oneukum@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vivek Gautam authored
commit 9b9d7cdd upstream. Fix the sequence of events in the dwc3_core_init() error exit path. The dwc3_core_exit() call is also removed from the error path since whatever it does has already been done. Fixes: c499ff71 ("usb: dwc3: core: re-factor init and exit paths") Cc: Felipe Balbi <felipe.balbi@linux.intel.com> Cc: Greg KH <gregkh@linuxfoundation.org> Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Marc Dietrich authored
commit 68fae2f3 upstream. This basically reverts commit e534f3e9 (staging:nvec: Introduce the use of the managed version of kzalloc). The serio struct must never be managed because it is refcounted; managing it leads to a double-free oops on module remove. Signed-off-by: Marc Dietrich <marvin24@gmx.de> Fixes: e534f3e9 ("staging:nvec: Introduce the use of the managed version of kzalloc") Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
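As a hedged illustration of why the revert matters (the serio API calls are real; the surrounding driver code is simplified): serio_unregister_port() drops the last reference and kfree()s the struct itself, so a devm-managed allocation would be freed a second time by devres on driver detach.

    /* minimal sketch: a refcounted serio port must come from plain
     * kzalloc(), not devm_kzalloc() -- the serio core kfree()s the
     * struct when the last reference is dropped, and devres would
     * then free it again on module remove */
    struct serio *ser_dev = kzalloc(sizeof(*ser_dev), GFP_KERNEL);

    if (!ser_dev)
            return -ENOMEM;
    ser_dev->id.type = SERIO_8042;  /* see also the type revert below */
    serio_register_port(ser_dev);   /* serio core now owns the struct */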
-
Paul Fertser authored
commit 17c1c9ba upstream. This reverts commit 36b30d61. This is necessary to detect the paz00 (ac100) touchpad properly as one speaking the ETPS/2 protocol. Without it X.org's synaptics driver doesn't work, as the touchpad is detected as an ImPS/2 mouse instead. Commit ec6184b1 changed the way auto-detection is performed on ports marked as pass through and made the issue apparent. A pass through port is an additional PS/2 port used to connect a slave device to a master device that is using PS/2 to communicate with the host (so the slave's PS/2 communication is tunneled over the master's PS/2 link). The "Synaptics PS/2 TouchPad Interfacing Guide" describes such a setup (PS/2 PASS-THROUGH OPTION section). Since paz00's embedded controller is not connected to a PS/2 port itself, the PS/2 interface it exposes is not a pass-through one. Signed-off-by: Paul Fertser <fercerpav@gmail.com> Acked-by: Marc Dietrich <marvin24@gmx.de> Fixes: 36b30d61 ("staging: nvec: ps2: change serio type to passthrough") Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paul Fertser authored
commit d8f8a74d upstream. This command was sent behind serio's back and the answer to it was confusing atkbd's probe function, which led to the elantech touchpad getting detected as a keyboard. To prevent this from happening, just let every party do its part of the job. Signed-off-by: Paul Fertser <fercerpav@gmail.com> Acked-by: Marc Dietrich <marvin24@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ian Abbott authored
commit 55abe816 upstream. `ni_tio_clock_period_ps()` used to return the clock period in picoseconds, and had a `BUG()` call for invalid cases. It was changed to pass the clock period back via a pointer parameter and return an error for the invalid cases. Unfortunately the code to handle user-specified clock sources with user-specified clock period is still returning the clock period the old way, which can lead to the caller not getting the clock period, or seeing an unexpected error. Fix it by passing the clock period via the pointer parameter and returning `0`. Fixes: b42ca86a ("staging: comedi: ni_tio: remove BUG() checks for ni_tio_get_clock_src()") Signed-off-by: Ian Abbott <abbotti@mev.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
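A minimal sketch of the calling convention the fix restores, with illustrative names and encodings (not the driver's): the period always travels through the pointer parameter, and the return value carries only 0 or a negative errno.

    /* sketch with illustrative names -- not the ni_tio code itself */
    #define SKETCH_USER_PERIOD_FLAG  0x80000000u
    #define SKETCH_PERIOD_BITS(src)  ((src) & 0xffffffu)

    static int sketch_clock_period(unsigned int clock_source, u64 *period_ps)
    {
            if (clock_source & SKETCH_USER_PERIOD_FLAG) {
                    /* user-specified source: period encoded in the source */
                    *period_ps = SKETCH_PERIOD_BITS(clock_source);
                    return 0;       /* not "return *period_ps;", which a
                                       caller would misread as an error */
            }
            return -EINVAL;         /* invalid clock source */
    }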
-
Huacai Chen authored
commit 33c027ae upstream. Earlier commits 30ca5cb6 ("staging: sm750fb: change definition of PANEL_PLANE_TL fields") and 27b047bb ("staging: sm750fb: change definition of PANEL_PLANE_BR fields") modified the register bit field definitions. But the modifications are wrong: the bit mask of bit field 10:0 is not 0xeff but 0x7ff. These wrong definitions make the display behave very strangely. Signed-off-by: Huacai Chen <chenhc@lemote.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
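The arithmetic is easy to verify in isolation: an 11-bit field spanning bits 10:0 has mask 2^11 - 1 = 0x7ff, while the typo'd 0xeff leaks into bit 11 and drops bit 8. A standalone check (the macro name is hypothetical):

    #include <assert.h>
    #include <stdio.h>

    /* mask for a register field occupying bits [high:low] */
    #define FIELD_MASK(high, low) (((1u << ((high) - (low) + 1)) - 1u) << (low))

    int main(void)
    {
            unsigned int correct = FIELD_MASK(10, 0);

            printf("bits 10:0 mask = 0x%x\n", correct);
            assert(correct == 0x7ff);
            /* 0xeff sets bit 11, which lies outside the field... */
            assert((0xeffu & ~correct) == 0x800);
            /* ...and clears bit 8, which lies inside it */
            assert((correct & ~0xeffu) == 0x100);
            return 0;
    }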
-
Arnd Bergmann authored
commit 34eee70a upstream. The ad5933_i2c_read function returns an error code to indicate whether it could read data or not. However ad5933_work() ignores this return code and just accesses the data unconditionally, which gets detected by gcc as a possible bug: drivers/staging/iio/impedance-analyzer/ad5933.c: In function 'ad5933_work': drivers/staging/iio/impedance-analyzer/ad5933.c:649:16: warning: 'status' may be used uninitialized in this function [-Wmaybe-uninitialized] This adds minimal error handling so we only evaluate the data if it was correctly read. Link: https://patchwork.kernel.org/patch/8110281/ Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
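The shape of the added error handling, as a sketch (the register and flag names are the driver's; the work function is heavily simplified and sketch_push_sample() is hypothetical):

    static void sketch_ad5933_work(struct ad5933_state *st)
    {
            unsigned char status;
            int ret;

            ret = ad5933_i2c_read(st->client, AD5933_REG_STATUS, 1, &status);
            if (ret < 0)
                    return;         /* failed read: 'status' is uninitialized */

            if (status & AD5933_STAT_DATA_VALID)
                    sketch_push_sample(st); /* only consume valid data */
    }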
-
Ulf Hansson authored
commit fe1b5700 upstream. In the eMMC 4.51 version of the spec, an EXT_CSD field called GENERIC_CMD6_TIME[248] was added. This allows cards to specify the maximum time it may need to move out from its busy state, when a CMD6 command has been sent. In cases when the card is compliant to versions < 4.51 of the eMMC spec, obviously the core needs to use a fall-back value for this timeout, which currently is set to 10 minutes. This value is completely in the wrong range and, importantly, in some cases it causes a card initialization to take more than 10 minutes to complete. Earlier this scenario was avoided as the mmc core used CMD13 to poll the card, to find out when it stopped signaling busy. Commit 08573eaf ("mmc: mmc: do not use CMD13 to get status after speed mode switch") changed this behavior. Instead of reverting that commit, which would cause other issues, let's instead start by picking a simple solution for the problem, by using a 500ms default generic CMD6 timeout. The reason for using exactly 500ms, comes from observations that shows it's quite common for cards to specify 250ms. 500ms is two times that value so likely it should be enough for most cards. Fixes: 08573eaf ("mmc: mmc: do not use CMD13 to get status after speed mode switch") Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Stephen Boyd <sboyd@codeaurora.org> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
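Roughly the shape of the fallback, as a sketch rather than the literal diff (EXT_CSD_GENERIC_CMD6_TIME is the real field index; the EXT_CSD value is in units of 10ms):

    #define DEFAULT_GENERIC_CMD6_TIMEOUT_MS 500     /* ~2x the common 250ms */

    /* sketch: trust the card-advertised switch timeout when present,
     * otherwise fall back to 500ms rather than 10 minutes */
    card->ext_csd.generic_cmd6_time = DEFAULT_GENERIC_CMD6_TIMEOUT_MS;
    if (ext_csd[EXT_CSD_GENERIC_CMD6_TIME])
            card->ext_csd.generic_cmd6_time =
                    10 * ext_csd[EXT_CSD_GENERIC_CMD6_TIME];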
-
Adrian Hunter authored
commit 69b962a6 upstream. In the busy response case (i.e. !host->data), an unexpected data interrupt would result in clearing the data command as though it had completed but without informing the upper layers and thus resulting in a hang. Fix by only clearing the data command for data interrupts that are expected. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Adrian Hunter authored
commit 6ebebeab upstream. CMD line reset during an ongoing data transfer can cause the data transfer to hang. Fix by delaying the reset until the data transfer is finished. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Laura Abbott authored
commit c25badc9 upstream. When converting to a shared library in ac5a181d ("cpupower: Add cpuidle parts into library"), cpu_freq_cpu_exists() was converted to cpupower_is_cpu_online(). cpu_freq_cpu_exists() returned 0 on success and -ENOSYS on failure, whereas cpupower_is_cpu_online() returns 1 on success. Check for the correct return value in cpufreq-set. Link: https://bugzilla.redhat.com/show_bug.cgi?id=1374212 Fixes: ac5a181d ("cpupower: Add cpuidle parts into library") Reported-by: Julian Seward <jseward@acm.org> Signed-off-by: Laura Abbott <labbott@redhat.com> Acked-by: Thomas Renninger <trenn@suse.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
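The semantic mismatch in one line, as a hedged sketch of the call site in cpufreq-set:

    /* sketch: cpupower_is_cpu_online() returns 1 for online, 0 for
     * offline, negative errno on error -- unlike the old
     * cpu_freq_cpu_exists(), which returned 0 on success */
    if (cpupower_is_cpu_online(cpu) != 1)
            continue;       /* skip CPUs that are offline or missing */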
-
Mika Westerberg authored
commit d2cdf5dc upstream. When the system is suspended to S3 the BIOS might re-initialize certain GPIO pins back to their original state or it may re-program the interrupt mask of others. For example, the Acer TravelMate B116-M had a BIOS bug where a certain GPIO pin (MF_ISH_GPIO_5) was programmed to trigger on high level, and the pin state was high once the BIOS gave control to the OS on resume. This triggers lots of messages like: irq 117, desc: ffff88017a61e600, depth: 1, count: 0, unhandled: 0 ->handle_irq(): ffffffff8109b613, handle_bad_irq+0x0/0x1e0 ->irq_data.chip(): ffffffffa0020180, chv_pinctrl_exit+0x2d84/0x12 [pinctrl_cherryview] ->action(): (null) IRQ_NOPROBE set We reset the mask back to a known state in chv_pinctrl_resume(), but that is called only after device interrupts have already been enabled. Now, this particular issue was fixed by upgrading the BIOS to the latest version (v1.23), but not everybody upgrades their BIOSes, so we fix it up in the driver as well. Prevent the possible interrupt storm by moving the suspend and resume hooks to be called at _noirq time instead. Since device interrupts are still disabled we can restore the mask back to a known state before an interrupt storm can happen. Reported-by: Christian Steiner <christian.steiner@outlook.de> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
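The mechanics of the fix, sketched (SET_NOIRQ_SYSTEM_SLEEP_PM_OPS is the standard kernel helper; the callback names are assumed):

    #include <linux/pm.h>

    /* sketch: run the hooks in the _noirq phase, while device
     * interrupts are still disabled, so the interrupt mask can be
     * restored before a misprogrammed pin gets a chance to storm */
    static const struct dev_pm_ops chv_pinctrl_pm_ops = {
            SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(chv_pinctrl_suspend_noirq,
                                          chv_pinctrl_resume_noirq)
    };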
-
Mika Westerberg authored
commit 56211121 upstream. If async suspend is enabled, the driver may access registers concurrently with another instance which may fail because of the bug in Cherryview GPIO hardware. Prevent this by taking the shared lock while accessing the hardware in suspend and resume hooks. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexey Brodkin authored
commit a79a8121 upstream. We used to use the generic implementation of dma_map_ops.mmap, dma_common_mmap(), but that only worked for simpler cached mappings where vaddr = paddr. If a driver requests an uncached DMA buffer, the kernel maps it to a virtual address so that the MMU gets involved and the page's uncached status is taken into account. In that case, use of dma_common_mmap() led to mapping vaddr to vaddr for user-space, which is obviously wrong. For more details please refer to the verbose explanation here [1]. So here we implement our own version of mmap() which always deals with dma_addr and maps the underlying memory to user-space properly (note that a DMA buffer mapped to user-space is always uncached because there's no way to properly manage its cache from user-space). [1] https://lkml.org/lkml/2016/10/26/973 Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
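A condensed sketch of the idea, not the literal ARC implementation (it assumes dma_addr equals the physical address, which holds on this platform; helper names are the generic kernel ones): derive the pfn from dma_addr rather than from the kernel vaddr, and map it uncached.

    static int sketch_dma_mmap(struct device *dev, struct vm_area_struct *vma,
                               void *cpu_addr, dma_addr_t dma_addr,
                               size_t size, unsigned long attrs)
    {
            /* pfn comes from the DMA (physical) address, not cpu_addr */
            unsigned long pfn = __phys_to_pfn(dma_addr);

            /* user-space mappings of DMA buffers are always uncached */
            vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
            return remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
                                   vma_pages(vma) << PAGE_SHIFT,
                                   vma->vm_page_prot);
    }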
-
Bjorn Helgaas authored
commit 16d917b1 upstream. If we're using a shadow copy of a PCI device ROM, the shadow copy is in RAM and the device never sees accesses to it and doesn't respond to it. We don't have to route the shadow range to the PCI device, and the device doesn't have to claim the range. Previously we treated the shadow copy as though it were the ROM BAR, and we failed to claim it because the region wasn't routed to the device: pci 0000:01:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] pci_bus 0000:01: Allocating resources pci 0000:01:00.0: can't claim BAR 6 [mem 0x000c0000-0x000dffff]: no compatible bridge window The failure path of pcibios_allocate_dev_rom_resource() cleared out the resource start address, which also caused the following ioremap() warning: WARNING: CPU: 0 PID: 116 at /build/linux-akdJXO/linux-4.8.0/arch/x86/mm/ioremap.c:121 __ioremap_caller+0x1ec/0x370 ioremap on RAM at 0x0000000000000000 - 0x000000000001ffff Handle an option ROM shadow copy as RAM, without trying to insert it into the iomem resource tree. This fixes a regression caused by 0c0e0736 ("PCI: Set ROM shadow location in arch code, not in PCI core"), which appeared in v4.6. The regression causes video device initialization to fail. This was reported on AMD Turks, but it likely affects others as well. Fixes: 0c0e0736 ("PCI: Set ROM shadow location in arch code, not in PCI core") Reported-and-tested-by: Vecu Bosseur <vecu.bosseur@gmail.com> Link: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1627496 Link: https://bugzilla.kernel.org/show_bug.cgi?id=175391 Link: https://bugzilla.redhat.com/show_bug.cgi?id=1352272 Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vineet Gupta authored
commit 922cc171 upstream. The current code doesn't even compile, as somehow the inline assembly can't see the register names defined as ARC_RTC_*. I'm pretty sure it worked when I first got it merged, but the tools were definitely different then. So better to write this in "C" anyway. Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
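Roughly the shape of the C replacement (a sketch: the AUX register names and the snapshot status bit are recalled from the ARC port and should be treated as assumptions): read the two 32-bit halves, then retry unless the hardware confirms they form an atomic snapshot.

    static cycle_t sketch_rtc_read(struct clocksource *cs)
    {
            unsigned long status;
            union {
                    struct { u32 low, high; };
                    cycle_t full;
            } stamp;

            /* the RTC clears a status bit if an interrupt hit between
             * the two reads, or if 'high' incremented after 'low' was
             * sampled -- retry until the pair is consistent */
            do {
                    stamp.low  = read_aux_reg(AUX_RTC_LOW);
                    stamp.high = read_aux_reg(AUX_RTC_HIGH);
                    status     = read_aux_reg(AUX_RTC_CTRL);
            } while (!(status & BIT(31)));

            return stamp.full;
    }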
-
Michael Holzheu authored
commit 237d6e68 upstream. Since commit d86bd1be ("mm/slub: support left redzone") it is no longer guaranteed that kmalloc(PAGE_SIZE) returns page aligned memory. After the above commit we get an error for diag224 because aligned memory is required. This leads to the following user visible error: # mount none -t s390_hypfs /sys/hypervisor/ mount: unknown filesystem type 's390_hypfs' # dmesg | grep hypfs hypfs.cccfb8: The hardware system does not provide all functions required by hypfs hypfs.7a79f0: Initialization of hypfs failed with rc=-61 Fix this problem and use get_free_page() instead of kmalloc() to get correctly aligned memory. Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
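A sketch of the allocation change (diag224() is the existing hypercall wrapper in the s390 hypfs code; the wrapper function and error value here are illustrative):

    /* sketch: __get_free_page() is guaranteed page-aligned, while
     * kmalloc(PAGE_SIZE, ...) no longer is since slub grew a left redzone */
    static int sketch_diag224_query(void **out)
    {
            void *buf = (void *)__get_free_page(GFP_KERNEL | GFP_DMA);

            if (!buf)
                    return -ENOMEM;
            if (diag224(buf)) {     /* the hypercall requires alignment */
                    free_page((unsigned long)buf);
                    return -EOPNOTSUPP;
            }
            *out = buf;             /* caller releases with free_page() */
            return 0;
    }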
-
Andrey Ryabinin authored
commit 70d78fe7 upstream. It may not be possible to freeze a coredumping task when it waits for 'core_state->startup' completion, because threads are frozen in get_signal() before they get a chance to complete 'core_state->startup'. Inability to freeze a task during suspend will cause suspend to fail. Also, CRIU uses the cgroup freezer during dump operations, so with an unfreezable task the CRIU dump will fail because it waits for a transition from 'FREEZING' to 'FROZEN' state which will never happen. Use freezer_do_not_count() to tell the freezer to ignore the coredumping task while it waits for core_state->startup completion. Link: http://lkml.kernel.org/r/1475225434-3753-1-git-send-email-aryabinin@virtuozzo.com Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Acked-by: Pavel Machek <pavel@ucw.cz> Acked-by: Oleg Nesterov <oleg@redhat.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Tejun Heo <tj@kernel.org> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: Michal Hocko <mhocko@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
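The fix itself is the classic freezer pattern around the wait in the coredump path, sketched here:

    /* sketch: tell the freezer to treat the dumper as frozen while it
     * sleeps -- the other threads are parked in the refrigerator from
     * get_signal() and would otherwise never complete startup */
    freezer_do_not_count();
    wait_for_completion(&core_state->startup);
    freezer_count();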
-
Mike Kravetz authored
commit 96b96a96 upstream. Error paths in hugetlb_cow() and hugetlb_no_page() may free a newly allocated huge page. If a reservation was associated with the huge page, alloc_huge_page() consumed the reservation while allocating. When the newly allocated page is freed in free_huge_page(), it will increment the global reservation count. However, the reservation entry in the reserve map will remain. This is not an issue for shared mappings as the entry in the reserve map indicates a reservation exists. But, an entry in a private mapping reserve map indicates the reservation was consumed and no longer exists. This results in an inconsistency between the reserve map and the global reservation count. This 'leaks' a reserved huge page. Create a new routine restore_reserve_on_error() to restore the reserve entry in these specific error paths. This routine makes use of a new function vma_add_reservation() which will add a reserve entry for a specific address/page. In general, these error paths were rarely (if ever) taken on most architectures. However, powerpc contained arch-specific code that resulted in an extra fault and execution of these error paths on all private mappings. Fixes: 67961f9d ("mm/hugetlb: fix huge page reserve accounting for private mappings") Link: http://lkml.kernel.org/r/1476933077-23091-2-git-send-email-mike.kravetz@oracle.com Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reported-by: Jan Stancek <jstancek@redhat.com> Tested-by: Jan Stancek <jstancek@redhat.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Kirill A . Shutemov <kirill.shutemov@linux.intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Naoya Horiguchi authored
commit c3901e72 upstream. When memory_failure() runs on a thp tail page after pmd is split, we trigger the following VM_BUG_ON_PAGE(): page:ffffd7cd819b0040 count:0 mapcount:0 mapping: (null) index:0x1 flags: 0x1fffc000400000(hwpoison) page dumped because: VM_BUG_ON_PAGE(!page_count(p)) ------------[ cut here ]------------ kernel BUG at /src/linux-dev/mm/memory-failure.c:1132! memory_failure() passed refcount and page lock from tail page to head page, which is not needed because we can pass any subpage to split_huge_page(). Fixes: 61f5d698 ("mm: re-enable THP") Link: http://lkml.kernel.org/r/1477961577-7183-1-git-send-email-n-horiguchi@ah.jp.nec.com Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jann Horn authored
commit dd111be6 upstream. When root activates a swap partition whose header has the wrong endianness, nr_badpages elements of badpages are swabbed before nr_badpages has been checked, leading to a buffer overrun of up to 8GB. This normally is not a security issue because it can only be exploited by root (more specifically, a process with CAP_SYS_ADMIN or the ability to modify a swap file/partition), and such a process can already e.g. modify swapped-out memory of any other userspace process on the system. Link: http://lkml.kernel.org/r/1477949533-2509-1-git-send-email-jann@thejh.net Signed-off-by: Jann Horn <jann@thejh.net> Acked-by: Kees Cook <keescook@chromium.org> Acked-by: Jerome Marchand <jmarchan@redhat.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Hugh Dickins <hughd@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
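The whole fix is an ordering change, sketched below (field and constant names are those of the swap header code): the untrusted count must be bounds-checked before it is used to walk badpages[].

    /* sketch: validate the garbage-controlled count *before* using it
     * as a loop bound to byte-swap entries in place */
    if (swab32(swap_header->info.nr_badpages) > MAX_SWAP_BADPAGES)
            return 0;       /* reject the header outright */

    swab32s(&swap_header->info.version);
    swab32s(&swap_header->info.last_page);
    swab32s(&swap_header->info.nr_badpages);
    for (i = 0; i < swap_header->info.nr_badpages; i++)
            swab32s(&swap_header->info.badpages[i]);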
-
Hugh Dickins authored
commit 9956edf3 upstream. If shmem_alloc_page() does not set PageLocked and PageSwapBacked, then shmem_replace_page() needs to do so for itself. Without this, it puts newpage on the wrong lru, re-unlocks the unlocked newpage, and system descends into "Bad page" reports and freeze; or if CONFIG_DEBUG_VM=y, it hits an earlier VM_BUG_ON_PAGE(!PageLocked), depending on config. But shmem_replace_page() is not a common path: it's only called when swapin (or swapoff) finds the page was already read into an unsuitable zone: usually all zones are suitable, but gem objects for a few drm devices (gma500, omapdrm, crestline, broadwater) require zone DMA32 if there's more than 4GB of ram. Fixes: 800d8c63 ("shmem: add huge pages support") Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1611062003510.11253@eggly.anvils Signed-off-by: Hugh Dickins <hughd@google.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
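An excerpt-style sketch of the fixed shmem_replace_page() sequence (surrounding code elided; the page-flag helpers are the real kernel ones):

    get_page(newpage);
    copy_highpage(newpage, oldpage);
    flush_dcache_page(newpage);

    /* the fix: shmem_alloc_page() no longer hands the page back locked
     * and swap-backed, so set both before swapping it into place */
    __SetPageLocked(newpage);
    __SetPageSwapBacked(newpage);
    SetPageUptodate(newpage);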
-
Vlastimil Babka authored
commit 5e322bee upstream. Christian Borntraeger reports: With commit 8ea1d2a1 ("mm, frontswap: convert frontswap_enabled to static key") kmemleak complains about a memory leak in swapon unreferenced object 0x3e09ba56000 (size 32112640): comm "swapon", pid 7852, jiffies 4294968787 (age 1490.770s) hex dump (first 32 bytes): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: __vmalloc_node_range+0x194/0x2d8 vzalloc+0x58/0x68 SyS_swapon+0xd60/0x12f8 system_call+0xd6/0x270 Turns out kmemleak is right. We now allocate the frontswap map depending on the kernel config (and no longer on the enablement) swapfile.c: [...] if (IS_ENABLED(CONFIG_FRONTSWAP)) frontswap_map = vzalloc(BITS_TO_LONGS(maxpages) * sizeof(long)); but later on this is passed along --> enable_swap_info(p, prio, swap_map, cluster_info, frontswap_map); and ignored if frontswap is disabled --> frontswap_init(p->type, frontswap_map); static inline void frontswap_init(unsigned type, unsigned long *map) { if (frontswap_enabled()) __frontswap_init(type, map); } Thing is, that frontswap map is never freed. The leakage is relatively not that bad, because swapon is an infrequent and privileged operation. However, if the first frontswap backend is registered after a swap type has been already enabled, it will WARN_ON in frontswap_register_ops() and frontswap will not be available for the swap type. Fix this by making sure the map is assigned by frontswap_init() as long as CONFIG_FRONTSWAP is enabled. Fixes: 8ea1d2a1 ("mm, frontswap: convert frontswap_enabled to static key") Link: http://lkml.kernel.org/r/20161026134220.2566-1-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Reported-by: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: David Vrabel <david.vrabel@citrix.com> Cc: Juergen Gross <jgross@suse.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
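The fix, sketched from the frontswap_init() inline quoted above: hand over the map whenever the config option is built in, not only when a backend has already registered.

    /* sketch: always pass the map to frontswap when CONFIG_FRONTSWAP is
     * enabled, so a late-registering backend finds it (and the map is
     * owned rather than leaked) */
    static inline void frontswap_init(unsigned type, unsigned long *map)
    {
    #ifdef CONFIG_FRONTSWAP
            __frontswap_init(type, map);
    #endif
    }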
-
Sean Young authored
commit ba13e98f upstream. When receiving a nec repeat, ensure the correct scancode is repeated rather than a random value from the stack. This removes the need for the bogus uninitialized_var() and also fixes the warnings: drivers/media/usb/dvb-usb/dib0700_core.c: In function ‘dib0700_rc_urb_completion’: drivers/media/usb/dvb-usb/dib0700_core.c:679: warning: ‘protocol’ may be used uninitialized in this function [sean addon: So after writing the patch and submitting it, I've bought the hardware on ebay. Without this patch you get random scancodes on nec repeats, which the patch indeed fixes.] Signed-off-by: Sean Young <sean@mess.org> Tested-by: Sean Young <sean@mess.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
murray foster authored
commit aa5f9209 upstream. Mismatching stream names in DAPM route and widget definitions are causing compilation errors. Fixing these names allows the cs4270 driver to compile and function. [Errors must be at probe time not compile time -- broonie] Signed-off-by: Murray Foster <mrafoster@gmail.com> Acked-by: Paul Handrigan <Paul.Handrigan@cirrus.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit 027a9fe6 upstream. The ALSA proc handler currently allows writes of unlimited size, until kmalloc() fails. But the write is basically meant only for small inputs, mostly one-line inputs, and we don't have to handle overly large sizes at all. Since the kmalloc error results in a kernel warning, it's better to limit the size beforehand. This patch adds a limit of 16kB, which must be large enough for the currently existing code. Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
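A sketch of the kind of guard added (the 16kB figure is from the commit; the handler shape is simplified and the function name is hypothetical):

    static ssize_t sketch_info_write(struct file *file,
                                     const char __user *ubuf,
                                     size_t count, loff_t *ppos)
    {
            /* sketch: these proc files take short, usually one-line,
             * commands -- cap the input instead of letting a huge
             * kmalloc() fail with a noisy warning */
            if (*ppos + count > 16 * 1024)
                    return -EIO;

            /* ... grow the buffer and copy_from_user() as before ... */
            return count;
    }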
-
Takashi Iwai authored
commit 6809cd68 upstream. Currently the ALSA proc handler allows a read or write even if the proc file is write-only or read-only. It's mostly harmless, doing nothing but allocating memory and ignoring the input/output. But it doesn't tell the user about the invalid use, and it's confusing and inconsistent in comparison with other proc files. This patch adds some sanity checks and makes the proc handler return an -EIO error when such an invalid read/write is performed. Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 15 Nov, 2016 11 commits
-
-
Greg Kroah-Hartman authored
-
Sumit Saxena authored
commit 5e5ec175 upstream. This patch will fix the regression caused by commit 1e793f6f ("scsi: megaraid_sas: Fix data integrity failure for JBOD (passthrough) devices"). The problem was that the MEGASAS_IS_LOGICAL macro did not have braces, and as a result the driver ended up exposing a lot of non-existing SCSI devices (all SCSI commands to channels 1,2,3 were returned as SUCCESS-DID_OK by the driver). [mkp: clarified patch description] Fixes: 1e793f6f ("scsi: megaraid_sas: Fix data integrity failure for JBOD (passthrough) devices") Reported-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com> Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Tested-by: Sumit Saxena <sumit.saxena@broadcom.com> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Tested-by: Jens Axboe <axboe@fb.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
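The pitfall reproduces in a few lines of plain C (illustrative names; the real macro compares the command's channel against MEGASAS_MAX_PD_CHANNELS): without parentheses around the ternary, a surrounding && binds to the condition alone and the ternary swallows the whole expression.

    #include <assert.h>

    #define MAX_PD_CHANNELS 4

    /* buggy: "a && IS_LOGICAL(ch)" parses as "(a && ch < MAX) ? 0 : 1" */
    #define IS_LOGICAL_BAD(ch)      (ch) < MAX_PD_CHANNELS ? 0 : 1
    /* fixed: the whole expression is parenthesized */
    #define IS_LOGICAL_GOOD(ch)     (((ch) < MAX_PD_CHANNELS) ? 0 : 1)

    int main(void)
    {
            int id_out_of_range = 0;        /* some other condition, false */
            int channel = 1;                /* a physical-device channel */

            /* intended result: 0 && 0 == 0 */
            assert((id_out_of_range && IS_LOGICAL_GOOD(channel)) == 0);
            /* buggy macro: (0 && 1 < 4) ? 0 : 1  ==  1 */
            assert((id_out_of_range && IS_LOGICAL_BAD(channel)) == 1);
            return 0;
    }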
-
Kashyap Desai authored
commit 1e793f6f upstream. Commit 02b01e01 ("megaraid_sas: return sync cache call with success") modified the driver to successfully complete SYNCHRONIZE_CACHE commands without passing them to the controller. Disk drive caches are only explicitly managed by controller firmware when operating in RAID mode. So this commit effectively disabled writeback cache flushing for any drives used in JBOD mode, leading to data integrity failures. [mkp: clarified patch description] Fixes: 02b01e01 ("megaraid_sas: return sync cache call with success") Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com> Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Ewan D. Milne <emilne@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
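A sketch of the restored queuecommand behavior (scsi_done and DID_OK are standard SCSI midlayer idioms; the surrounding function is elided):

    /* sketch: firmware manages the drive cache only for virtual disks,
     * so complete SYNCHRONIZE_CACHE locally for those alone -- JBOD
     * drives still need the flush to reach the controller and disk */
    if (scmd->cmnd[0] == SYNCHRONIZE_CACHE && MEGASAS_IS_LOGICAL(scmd)) {
            scmd->result = DID_OK << 16;
            scmd->scsi_done(scmd);
            return 0;
    }
    /* otherwise fall through and send the command to the firmware */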
-
Felipe Balbi authored
commit a9c3ca5f upstream. Some requests could be accounted for multiple times. Let's fix that so each and every request is accounted for only once. Fixes: 55a0237f ("usb: dwc3: gadget: use allocated/queued reqs for LST bit") Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ivan Vecera authored
[ Upstream commit f9d4286b ] Commit 01cfbad7 ("ipv4: Update parameters for csum_tcpudp_magic to their original types") changed parameters for csum_tcpudp_magic and csum_tcpudp_nofold for many platforms but not for PowerPC. Fixes: 01cfbad7 ("ipv4: Update parameters for csum_tcpudp_magic to their original types") Cc: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> Acked-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
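For reference, a sketch of the prototype shape the PowerPC version now matches (this mirrors the post-01cfbad7 generic helper):

    /* len widened to __u32, proto narrowed to __u8, as on other arches */
    static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
                                            __u32 len, __u8 proto,
                                            __wsum sum)
    {
            return csum_fold(csum_tcpudp_nofold(saddr, daddr, len,
                                                proto, sum));
    }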
-
Willem de Bruijn authored
[ Upstream commit 104ba78c ] When transmitting on a packet socket with PACKET_VNET_HDR and PACKET_QDISC_BYPASS, validate device support for features requested in vnet_hdr. Drop TSO packets sent to devices that do not support TSO or have the feature disabled. Note that the latter currently do process those packets correctly, regardless of not advertising the feature. Because of SKB_GSO_DODGY, it is not sufficient to test device features with netif_needs_gso. Full validate_xmit_skb is needed. Switch to software checksum for non-TSO packets that request checksum offload if that device feature is unsupported or disabled. Note that similar to the TSO case, device drivers may perform checksum offload correctly even when not advertising it. When switching to software checksum, packets hit skb_checksum_help, which has two BUG_ON checks for checksums not in the linear segment. Packet sockets always allocate at least up to csum_start + csum_off + 2 as linear. Tested by running github.com/wdebruij/kerneltools/psock_txring_vnet.c ethtool -K eth0 tso off tx on psock_txring_vnet -d $dst -s $src -i eth0 -l 2000 -n 1 -q -v psock_txring_vnet -d $dst -s $src -i eth0 -l 2000 -n 1 -q -v -N ethtool -K eth0 tx off psock_txring_vnet -d $dst -s $src -i eth0 -l 1000 -n 1 -q -v -G psock_txring_vnet -d $dst -s $src -i eth0 -l 1000 -n 1 -q -v -G -N v2: - add EXPORT_SYMBOL_GPL(validate_xmit_skb_list) Fixes: d346a3fa ("packet: introduce PACKET_QDISC_BYPASS socket option") Signed-off-by: Willem de Bruijn <willemb@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
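The core of the change fits in a couple of lines, sketched here for the qdisc-bypass transmit path (validate_xmit_skb_list() is the real exported helper; the placement is a sketch of packet_direct_xmit()):

    /* sketch: run the full validation -- GSO segmentation for dodgy
     * packets and software-checksum fallback -- instead of handing the
     * skb to the driver unchecked */
    skb = validate_xmit_skb_list(skb, dev);
    if (skb == NULL)
            goto drop;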
-
Eli Cooper authored
[ Upstream commit ae148b08 ] This patch updates skb->protocol to ETH_P_IPV6 in ip6_tnl_xmit() when an IPv6 header is installed to a socket buffer. This is not a cosmetic change. Without updating this value, GSO packets transmitted through an ipip6 tunnel have the protocol of ETH_P_IP and skb_mac_gso_segment() will attempt to call gso_segment() for IPv4, which results in the packets being dropped. Fixes: b8921ca8 ("ip4ip6: Support for GSO/GRO") Signed-off-by: Eli Cooper <elicooper@gmx.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
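The fix is a single assignment after the header push in ip6_tnl_xmit(), sketched in context:

    skb_push(skb, sizeof(struct ipv6hdr));
    skb_reset_network_header(skb);
    skb->protocol = htons(ETH_P_IPV6);  /* the fix: was left as ETH_P_IP
                                         * for ipip6, so GSO picked the
                                         * IPv4 segmenter and the packets
                                         * were dropped */
    ipv6h = ipv6_hdr(skb);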
-
Marcelo Ricardo Leitner authored
[ Upstream commit bf911e98 ] Andrey Konovalov reported that KASAN detected that SCTP was using a slab beyond the boundaries. It was caused because when handling out of the blue packets in function sctp_sf_ootb() it was checking the chunk len only after already processing the first chunk, validating only for the 2nd and subsequent ones. The fix is to just move the check upwards so it's also validated for the 1st chunk. Reported-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Reviewed-by: Xin Long <lucien.xin@gmail.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
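A sketch of the reordered loop in sctp_sf_ootb() (condensed, with the per-chunk handling elided): the length validation now runs at the top, so the very first chunk is covered too.

    do {
            /* validate before touching the chunk -- previously this
             * check sat after the first chunk had been processed */
            if (ntohs(ch->length) < sizeof(sctp_chunkhdr_t))
                    return sctp_sf_violation_chunklen(net, ep, asoc, type,
                                                      arg, commands);

            /* reject chunks whose declared length overflows the packet */
            ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length));
            if (ch_end > skb_tail_pointer(skb))
                    return sctp_sf_violation_chunklen(net, ep, asoc, type,
                                                      arg, commands);

            /* ... handle ABORT / ERROR chunks here ... */

            ch = (sctp_chunkhdr_t *)ch_end;
    } while (ch_end < skb_tail_pointer(skb));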
-
Jamal Hadi Salim authored
[ Upstream commit 9ee78374 ] Daniel says: While trying out [1][2], I noticed that tc monitor doesn't show the correct handle on delete: $ tc monitor qdisc clsact ffff: dev eno1 parent ffff:fff1 filter dev eno1 ingress protocol all pref 49152 bpf handle 0x2a [...] deleted filter dev eno1 ingress protocol all pref 49152 bpf handle 0xf3be0c80 some context to explain the above: The user identity of any tc filter is represented by a 32-bit identifier encoded in tcm->tcm_handle. Example 0x2a in the bpf filter above. A user wishing to delete, get or even modify a specific filter uses this handle to reference it. Every classifier is free to provide its own semantics for the 32 bit handle. Example: classifiers like u32 use schemes like 800:1:801 to describe the semantics of their filters represented as hash table, bucket and node ids etc. Classifiers also have internal per-filter representation which is different from this externally visible identity. Most classifiers set this internal representation to be a pointer address (which allows fast retrieval of said filters in their implementations). This internal representation is referenced with the "fh" variable in the kernel control code. When a user successfully deletes a specific filter, by specifying the correct tcm->tcm_handle, an event is generated to user space which indicates which specific filter was deleted. Before this patch, the "fh" value was sent to user space as the identity. As an example what is shown in the sample bpf filter delete event above is 0xf3be0c80. This is in fact a 32-bit truncation of 0xffff8807f3be0c80 which happens to be a 64-bit memory address of the internal filter representation (address of the corresponding filter's struct cls_bpf_prog); After this patch the appropriate user identifiable handle as encoded in the originating request tcm->tcm_handle is generated in the event. One of the cardinal rules of netlink is to be able to take an event (such as a delete in this case) and reflect it back to the kernel and successfully delete the filter. This patch achieves that. Note, this issue has existed since the original TC action infrastructure code patch back in 2004 as found in: https://git.kernel.org/cgit/linux/kernel/git/history/history.git/commit/ [1] http://patchwork.ozlabs.org/patch/682828/ [2] http://patchwork.ozlabs.org/patch/682829/ Fixes: 4e54c481 ("[NET]: Add tc extensions infrastructure.") Reported-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
David Ahern authored
[ Upstream commit d5d32e4b ] Similar to IPv4, do not consider link state when validating next hops. Currently, if the link is down, default routes can fail to insert: $ ip -6 ro add vrf blue default via 2100:2::64 dev eth2 RTNETLINK answers: No route to host With this patch the command succeeds. Fixes: 8c14586f ("net: ipv6: Use passed in table for nexthop lookups") Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tobias Brunner authored
[ Upstream commit e0f841f5 ] Even if sending SCIs is explicitly disabled, the code that creates the Security Tag might still decide to add it (e.g. if multiple RX SCs are defined on the MACsec interface). But because the header length so far depended only on the configuration option, the SCI overwrote the original frame's contents (EtherType and e.g. the beginning of the IP header) and, if encrypted, did not visibly end up in the packet, while the SC flag in the TCI field of the Security Tag was still set, resulting in invalid MACsec frames. Fixes: c09440f7 ("macsec: introduce IEEE 802.1AE driver") Signed-off-by: Tobias Brunner <tobias@strongswan.org> Acked-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-