- 07 Mar, 2014 40 commits
-
-
Hannes Reinecke authored
commit a1989b33 upstream. An invalid ioctl will never be valid, irrespective of whether multipath has active paths or not. So for invalid ioctls we do not have to wait for multipath to activate any paths, but can rather return an error code immediately. This fix resolves numerous instances of: udevd[]: worker [] unexpectedly returned with status 0x0100 that have been seen during testing. Signed-off-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Linus Walleij authored
commit e9baa9d9 upstream. It appears that in the DMA40 driver the DMA tasklet will very often dereference memory for a descriptor just freed from the DMA40 slab. Nothing happens because no other part of the driver has yet had a chance to claim this memory, but it's really nasty to dereference freed memory, so let's check the flag before the descriptor is freed and store it in a bool variable. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
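A minimal sketch of the pattern this fix describes (names are illustrative, not the actual ste_dma40 code): copture the flag into a local bool while the descriptor is still valid, free the descriptor, then act only on the saved copy.

    /* Illustrative only -- not the real dma40 tasklet. */
    #include <linux/slab.h>
    #include <linux/types.h>

    struct example_desc {
            bool cyclic;            /* flag needed after the descriptor is gone */
            /* ... */
    };

    static void example_tasklet(struct example_desc *d)
    {
            /* Save what we need while the descriptor is still valid ... */
            bool callback_active = d->cyclic;

            kfree(d);               /* ... free it ... */

            if (callback_active) {
                    /* ... and decide on the callback from the saved copy,
                     * never by dereferencing the freed descriptor. */
            }
    }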
-
Jean Delvare authored
commit 75135da0 upstream. pci_get_device() decrements the reference count of "from" (last argument) so when we break off the loop successfully we have only one device reference - and we don't know which device we have. If we want a reference to each device, we must take them explicitly and let the pci_get_device() walk complete to avoid duplicate references. This is serious, as over-putting device references will cause the device to eventually disappear. Without this fix, the kernel crashes after a few insmod/rmmod cycles. Tested on an Intel S7000FC4UR system with a 7300 chipset. Signed-off-by: Jean Delvare <jdelvare@suse.de> Link: http://lkml.kernel.org/r/20140224111656.09bbb7ed@endymion.delvare Cc: Mauro Carvalho Chehab <m.chehab@samsung.com> Cc: Doug Thompson <dougthompson@xmission.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
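A sketch of the reference-counting pattern described above (device IDs and the array are made up for illustration): take an explicit reference on each match with pci_dev_get() and let the pci_get_device() walk run to completion rather than breaking out of the loop.

    #include <linux/pci.h>

    #define EXAMPLE_VENDOR  0x8086          /* illustrative IDs only */
    #define EXAMPLE_DEVICE  0x1234
    #define MAX_BRANCHES    4

    static struct pci_dev *branch_dev[MAX_BRANCHES];

    static int example_collect_devices(void)
    {
            struct pci_dev *pdev = NULL;
            int n = 0;

            /* pci_get_device() drops the reference on the device passed in
             * and returns the next match with a reference held.  Let the
             * walk complete and take an explicit pci_dev_get() on every
             * device we intend to keep. */
            while ((pdev = pci_get_device(EXAMPLE_VENDOR, EXAMPLE_DEVICE, pdev))) {
                    if (n < MAX_BRANCHES)
                            branch_dev[n++] = pci_dev_get(pdev);
            }
            return n;
    }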
-
Dr. Greg Wettstein authored
commit 6f58c780 upstream. A selective retransmission request (SRR) is a fibre-channel protocol control request which provides support for requesting retransmission of a data sequence in response to an issue such as frame loss or corruption. These events are experienced infrequently in fibre-channel based networks which makes it difficult to test and assess codepaths which handle these events. We were fortunate enough, for some definition of fortunate, to have a metro-area single-mode SAN link which, at 10 GBPS sustained load levels, would consistently generate SRR's in a SCST based target implementation using our SCST/in-kernel Qlogic target interface driver. In response to an SRR the in-kernel Qlogic target driver immediately panics resulting in a catastrophic storage failure for serviced initiators. The culprit was a debug statement in the qla_target.c file which does not verify that a pointer to the SCSI CDB is not null. The unchecked pointer dereference results in the kernel panic and resultant system failure. The other two references to the SCSI CDB by the SRR handling code use a ternary operator to verify a non-null pointer is being acted on. This patch simply adds a similar test to the implicated debug statement. This patch is a candidate for any stable kernel being maintained since it addresses a potentially catastrophic event with minimal downside. Signed-off-by: Dr. Greg Wettstein <greg@enjellic.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Will Deacon authored
commit 00efaa02 upstream. Commit 15e7e5c1 ("ARM: 7749/1: spinlock: retry trylock operation if strex fails on free lock") modified our arch_spin_trylock to retry the acquisition if the lock appeared uncontended, but the strex failed. This patch does the same for rwlocks, which were missed by the original patch. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Li Zefan <lizefan@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Will Deacon authored
commit 15e7e5c1 upstream. An exclusive store instruction may fail for reasons other than lock contention (e.g. a cache eviction during the critical section) so, in line with other architectures using similar exclusive instructions (alpha, mips, powerpc), retry the trylock operation if the lock appears to be free but the strex reported failure. Reported-by: Tony Thompson <anthony.thompson@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Li Zefan <lizefan@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
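The retry idea in this entry and the rwlock counterpart above, as a conceptual C sketch rather than the real ldrex/strex assembly: a weak compare-and-swap, like strex, may fail spuriously even when the lock is free, so failure on an apparently free lock means "try again", not "contended".

    /* Conceptual illustration only -- the real code is inline ldrex/strex
     * assembly in arch/arm/include/asm/spinlock.h. */
    static int example_trylock(unsigned int *lock)
    {
            unsigned int expected;

            do {
                    expected = __atomic_load_n(lock, __ATOMIC_RELAXED);
                    if (expected != 0)
                            return 0;       /* genuinely held by someone else */
                    /* Weak CAS: may fail even though *lock was observed as 0 ... */
            } while (!__atomic_compare_exchange_n(lock, &expected, 1, true,
                                                  __ATOMIC_ACQUIRE, __ATOMIC_RELAXED));

            return 1;                       /* ... so retry instead of reporting failure */
    }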
-
Stephen Warren authored
commit 88596857 upstream. Fix tegra_init_cache() to check whether the system has a PL310 cache before touching the PL310 registers. This prevents access to non-existent registers on Tegra114 and later. Note for stable kernels: In <= v3.12, the file to patch is arch/arm/mach-tegra/common.c. Signed-off-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Olof Johansson authored
commit e306dfd0 upstream. The frame PC value in the unwind code used to just take the saved LR value and use that. That's incorrect as a stack trace, since it shows the return path stack, not the call path stack. In particular, it shows faulty information in case the bl is done as the very last instruction of one label, since the return point will be in the next label. That can easily be seen with tail calls to panic(), which is marked __noreturn and thus doesn't have anything useful after it. Easiest here is to just correct the unwind code and do a -4, to get the actual call site for the backtrace instead of the return site. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
James Hogan authored
commit f229006e upstream. Fix irq_set_affinity callbacks in the Meta IRQ chip drivers to AND cpu_online_mask into the cpumask when picking a CPU to vector the interrupt to. As Thomas pointed out, the /proc/irq/$N/smp_affinity interface doesn't filter out offline CPUs, so without this patch if you offline CPU0 and set an IRQ affinity to 0x3 it vectors the interrupt onto CPU0 even though it is offline. Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-metag@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
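A sketch of the kind of check described (the callback name is illustrative): mask the requested affinity with cpu_online_mask before picking the CPU to vector the interrupt to.

    #include <linux/cpumask.h>
    #include <linux/irq.h>

    static int example_set_affinity(struct irq_data *data,
                                    const struct cpumask *mask, bool force)
    {
            unsigned int cpu;

            /* Only consider CPUs that are both requested and online. */
            cpu = cpumask_first_and(mask, cpu_online_mask);
            if (cpu >= nr_cpu_ids)
                    return -EINVAL;

            /* ... vector the interrupt to 'cpu' ... */
            return 0;
    }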
-
Charles Keepax authored
commit c4204960 upstream. snd_soc_dapm_sync takes the dapm_mutex internally, but we currently take it externally as well. This patch fixes this. Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Davidlohr Bueso authored
commit f3713fd9 upstream. Commit 93e6f119 ("ipc/mqueue: cleanup definition names and locations") added global hardcoded limits to the number of message queues that can be created. While these limits are per-namespace, reality is that it ends up breaking userspace applications. Historically users have, at least in theory, been able to create up to INT_MAX queues, and limiting it to just 1024 is way too low and dramatic for some workloads and use cases. For instance, Madars reports: "This update imposes bad limits on our multi-process application. As our app uses approaches that each process opens its own set of queues (usually something about 3-5 queues per process). In some scenarios we might run up to 3000 processes or more (which of-course for linux is not a problem). Thus we might need up to 9000 queues or more. All processes run under one user." Other affected users can be found in launchpad bug #1155695: https://bugs.launchpad.net/ubuntu/+source/manpages/+bug/1155695 Instead of increasing this limit, revert it entirely and fall back to the original way of dealing with queue limits -- where once a user's resource limit is reached, and all memory is used, new queues cannot be created. Signed-off-by: Davidlohr Bueso <davidlohr@hp.com> Reported-by: Madars Vitolins <m@silodev.com> Acked-by: Doug Ledford <dledford@redhat.com> Cc: Manfred Spraul <manfred@colorfullife.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Kara authored
commit 1362f4ea upstream. Currently last dqput() can race with dquot_scan_active() causing it to call callback for an already deactivated dquot. The race is as follows:

    CPU1                                          CPU2
    dqput()
      spin_lock(&dq_list_lock);
      if (atomic_read(&dquot->dq_count) > 1) {
       - not taken
      if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
      spin_unlock(&dq_list_lock);
      ->release_dquot(dquot);
        if (atomic_read(&dquot->dq_count) > 1)
         - not taken
                                                  dquot_scan_active()
                                                    spin_lock(&dq_list_lock);
                                                    if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
                                                     - not taken
                                                    atomic_inc(&dquot->dq_count);
                                                    spin_unlock(&dq_list_lock);
        - proceeds to release dquot
                                                    ret = fn(dquot, priv);
                                                     - called for inactive dquot

Fix the problem by making sure possible ->release_dquot() is finished by the time we call the callback and new calls to it will notice reference dquot_scan_active() has taken and bail out. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Paris authored
commit 9085a642 upstream. When writing policy via /sys/fs/selinux/policy I wrote the type and class of filename trans rules in CPU endian instead of little endian. On x86_64 this works just fine, but it means that on big endian arch's like ppc64 and s390 userspace reads the policy and converts it from le32_to_cpu. So the values are all screwed up. Write the values in le format like it should have been to start. Signed-off-by: Eric Paris <eparis@redhat.com> Acked-by: Stephen Smalley <sds@tycho.nsa.gov> Signed-off-by: Paul Moore <pmoore@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
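The endianness rule at issue, as a small hedged sketch (the buffer layout is illustrative, not the real policydb format): values written to the binary policy must go through cpu_to_le32() so that big-endian kernels and userspace agree on the on-disk format.

    #include <asm/byteorder.h>
    #include <linux/string.h>
    #include <linux/types.h>

    /* Illustrative writer: emit two 32-bit fields in little-endian order. */
    static void example_write_trans(void *buf, u32 type, u32 class)
    {
            __le32 out[2];

            out[0] = cpu_to_le32(type);     /* on-disk format is LE ... */
            out[1] = cpu_to_le32(class);    /* ... regardless of CPU endianness */
            memcpy(buf, out, sizeof(out));
    }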
-
Max Filippov authored
commit e2fd1374 upstream. Most in-kernel users want registers spilled on the kernel stack and don't require PS.EXCM to be set. That means that they don't need the fixup routine and can reuse the regular window overflow mechanism for that, which makes the spill routine very simple. Suggested-by: Chris Zankel <chris@zankel.net> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit 37c367ec upstream. HP Folio 13 may have a broken BIOS that doesn't set up the mute LED GPIO properly, and the driver guesses it wrongly, too. Add a new fixup entry for setting the GPIO pin statically for this laptop. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=70991 Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Peter Zijlstra authored
commit e3703f8c upstream. Drew Richardson reported that he could make the kernel go *boom* when hotplugging while having perf events active. It turned out that when you have a group event, the code in __perf_event_exit_context() fails to remove the group siblings from the context. We then proceed with destroying and freeing the event, and when you re-plug the CPU and try and add another event to that CPU, things go *boom* because you've still got dead entries there. Reported-by: Drew Richardson <drew.richardson@arm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/n/tip-k6v5wundvusvcseqj1si0oz0@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Denis CIOCCA authored
commit a0657716 upstream. The driver was not able to manage the sensor: during probe function and wai check, the driver stops and writes: "device name and WhoAmI mismatch." The correct value of L3GD20H wai is 0xd7 instead of 0xd4. Dropped support for the sensor. Signed-off-by: Denis Ciocca <denis.ciocca@st.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Arve Hjønnevåg authored
commit e194fd8a upstream. The change (008fa749) that moved the node release code to a separate function broke death notifications in some cases. When it encountered a reference without a death notification request, it would skip looking at the remaining references, and therefore fail to send death notifications for them. Cc: Colin Cross <ccross@android.com> Cc: Android Kernel Team <kernel-team@android.com> Signed-off-by: Arve Hjønnevåg <arve@android.com> Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Jeremy Compostella <jeremy.compostella@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lai Jiangshan authored
commit 5bdfff96 upstream. When a kworker should die, the kworker is notified through the WORKER_DIE flag instead of kthread_should_stop(). This, IIRC, is primarily to keep the test synchronized inside the worker_pool lock. WORKER_DIE is first set while holding pool->lock, the lock is dropped and kthread_stop() is called. Unfortunately, this means that there's a slight chance that the target kworker may see WORKER_DIE before kthread_stop() finishes and exits and frees the target task before or during kthread_stop(). Fix it by pinning the target task before setting WORKER_DIE and putting it after kthread_stop() is done. tj: Improved patch description and comment. Moved pinning above WORKER_DIE to better signify what it's protecting. Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
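A rough sketch of the pin-then-stop ordering described above (heavily simplified, not the actual workqueue code): hold a task reference across the window between setting WORKER_DIE and kthread_stop() returning.

    #include <linux/kthread.h>
    #include <linux/sched.h>
    #include <linux/spinlock.h>

    /* Fields only loosely mirror the real struct worker. */
    struct example_worker {
            struct task_struct *task;
            unsigned int flags;
    };
    #define EXAMPLE_WORKER_DIE 0x1

    static void example_destroy_worker(struct example_worker *worker,
                                       spinlock_t *pool_lock)
    {
            /* Pin the task first so it cannot be freed while we still use it. */
            get_task_struct(worker->task);

            spin_lock_irq(pool_lock);
            worker->flags |= EXAMPLE_WORKER_DIE;
            spin_unlock_irq(pool_lock);

            kthread_stop(worker->task);     /* worker may exit as soon as DIE is seen */
            put_task_struct(worker->task);  /* drop our pin only after stop completes */
    }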
-
Guenter Roeck authored
commit 500a9157 upstream. When trying to set the minimum temperature, the driver was erroneously writing the maximum temperature into the chip. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Chao Bi authored
commit accb884b upstream. In mei_cl_read_start(), if it fails to send flow control request, it will release "cl->read_cb" but forget to set pointer to NULL, leaving "cl->read_cb" still pointing to random memory, next time this client is operated like mei_release(), it has chance to refer to this wrong pointer. Fixes: PANIC at kfree in mei_release() [228781.826904] Call Trace: [228781.829737] [<c16249b8>] ? mei_cl_unlink+0x48/0xa0 [228781.835283] [<c1624487>] mei_io_cb_free+0x17/0x30 [228781.840733] [<c16265d8>] mei_release+0xa8/0x180 [228781.845989] [<c135c610>] ? __fsnotify_parent+0xa0/0xf0 [228781.851925] [<c1325a69>] __fput+0xd9/0x200 [228781.856696] [<c1325b9d>] ____fput+0xd/0x10 [228781.861467] [<c125cae1>] task_work_run+0x81/0xb0 [228781.866821] [<c1242e53>] do_exit+0x283/0xa00 [228781.871786] [<c1a82b36>] ? kprobe_flush_task+0x66/0xc0 [228781.877722] [<c124eeb8>] ? __dequeue_signal+0x18/0x1a0 [228781.883657] [<c124f072>] ? dequeue_signal+0x32/0x190 [228781.889397] [<c1243744>] do_group_exit+0x34/0xa0 [228781.894750] [<c12517b6>] get_signal_to_deliver+0x206/0x610 [228781.901075] [<c12018d8>] do_signal+0x38/0x100 [228781.906136] [<c1626d1c>] ? mei_read+0x42c/0x4e0 [228781.911393] [<c12600a0>] ? wake_up_bit+0x30/0x30 [228781.916745] [<c16268f0>] ? mei_poll+0x120/0x120 [228781.922001] [<c1324be9>] ? vfs_read+0x89/0x160 [228781.927158] [<c16268f0>] ? mei_poll+0x120/0x120 [228781.932414] [<c133ca34>] ? fget_light+0x44/0xe0 [228781.937670] [<c1324e58>] ? SyS_read+0x68/0x80 [228781.942730] [<c12019f5>] do_notify_resume+0x55/0x70 [228781.948376] [<c1a7de5d>] work_notifysig+0x29/0x30 [228781.953827] [<c1a70000>] ? bad_area+0x5/0x3e Signed-off-by: Chao Bi <chao.bi@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
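The dangling-pointer pattern being fixed, as a generic sketch (structure and helper names are illustrative, not the mei driver's): clear the cached pointer in the error path so later paths see NULL instead of freed memory.

    #include <linux/errno.h>
    #include <linux/slab.h>

    struct example_cb { size_t buf_size; /* ... */ };

    struct example_client {
            struct example_cb *read_cb;     /* cached read request, if any */
    };

    /* Stand-in for the real flow-control request; here it always fails. */
    static int example_send_flow_control(struct example_client *cl)
    {
            return -EIO;
    }

    static int example_read_start(struct example_client *cl, struct example_cb *cb)
    {
            int ret = example_send_flow_control(cl);

            if (ret) {
                    kfree(cb);
                    cl->read_cb = NULL;     /* without this, a later release path
                                             * could kfree() the stale pointer again */
                    return ret;
            }
            cl->read_cb = cb;
            return 0;
    }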
-
Joerg Dorchain authored
commit 6dbd46c8 upstream. Hello, the following patch adds an entry for the PID of a Cressi Leonardo diving computer interface to kernel 3.13.0. It is detected as FT232RL. Works with subsurface. Signed-off-by: Joerg Dorchain <joerg@dorchain.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Stanislaw Gruszka authored
commit a1227f3c upstream. ehci_irq() and ehci_hrtimer_func() can deadlock on ehci->lock when threadirqs option is used. To prevent the deadlock use spin_lock_irqsave() in ehci_irq(). This change can be reverted when hrtimer callbacks become threaded. Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
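A sketch of the locking change described (handler body omitted): with forced-threaded interrupts the handler no longer runs with local interrupts disabled, so a plain spin_lock() can deadlock against the hrtimer callback and must become spin_lock_irqsave().

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);   /* also taken from an hrtimer callback */

    static irqreturn_t example_irq(int irq, void *dev_id)
    {
            unsigned long flags;

            /* spin_lock() alone is only safe if IRQs are already off here;
             * with threadirqs they are not, hence irqsave/irqrestore. */
            spin_lock_irqsave(&example_lock, flags);
            /* ... handle the interrupt ... */
            spin_unlock_irqrestore(&example_lock, flags);

            return IRQ_HANDLED;
    }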
-
Aleksander Morgado authored
commit 12df84d4 upstream. This interface is to be handled by the qmi_wwan driver. CC: Hans-Christoph Schemmel <hans-christoph.schemmel@gemalto.com> CC: Christian Schmiedl <christian.schmiedl@gemalto.com> CC: Nicolaus Colberg <nicolaus.colberg@gemalto.com> CC: David McCullough <david.mccullough@accelecon.com> Signed-off-by: Aleksander Morgado <aleksander@aleksander.es> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Florian Fainelli authored
commit 2d1f7af3 upstream. Commit 3dc6475c ("bcm63xx_enet: add support Broadcom BCM6345 Ethernet") changed the ENETDMA[CS] macros such that they are no longer macros, but actual register offset definitions. The bcm63xx_udc driver was not updated, and as a result, causes the following build error to pop up: CC drivers/usb/gadget/u_ether.o drivers/usb/gadget/bcm63xx_udc.c: In function 'iudma_write': drivers/usb/gadget/bcm63xx_udc.c:642:24: error: called object '0' is not a function drivers/usb/gadget/bcm63xx_udc.c: In function 'iudma_reset_channel': drivers/usb/gadget/bcm63xx_udc.c:698:46: error: called object '0' is not a function drivers/usb/gadget/bcm63xx_udc.c:700:49: error: called object '0' is not a function Fix this by updating usb_dmac_{read,write}l and usb_dmas_{read,write}l to take an extra channel argument, and use the channel width (ENETDMA_CHAN_WIDTH) to offset the register we want to access, hence doing again what the macro implicitly did for us. Cc: Kevin Cernekee <cernekee@gmail.com> Cc: Jonas Gorski <jogo@openwrt.org> Signed-off-by: Florian Fainelli <florian@openwrt.org> Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Matthieu CASTET authored
commit 5bf5dbed upstream. ENDPTFLUSH and ENDPTPRIME registers are set by software and cleared by hardware. There is a bit for each endpoint. When we are setting a bit for an endpoint we should make sure we do not touch another endpoint's bit. There is a race condition if the hardware clears the bit between the read and the write in hw_write. Signed-off-by: Peter Chen <peter.chen@freescale.com> Signed-off-by: Matthieu CASTET <matthieu.castet@parrot.com> Tested-by: Michael Grzeschik <mgrzeschik@pengutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
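A sketch of the write discipline described (register naming and base-address handling simplified): for a register that hardware clears on its own, write only the bit you want to set instead of read-modify-write, so a concurrent hardware clear of another endpoint's bit cannot be undone.

    #include <linux/io.h>
    #include <linux/types.h>

    /* Racy: hardware may clear another endpoint's bit between readl and writel. */
    static void prime_ep_racy(void __iomem *reg, u32 ep_bit)
    {
            writel(readl(reg) | ep_bit, reg);
    }

    /* Safe for set-by-software/clear-by-hardware registers: touch only our bit. */
    static void prime_ep(void __iomem *reg, u32 ep_bit)
    {
            writel(ep_bit, reg);
    }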
-
Olivier Sobrie authored
commit 862474f8 upstream. The number of channels returned by the HW must be checked: it cannot be greater than MAX_NET_DEVICES, otherwise the driver will crash. Signed-off-by: Olivier Sobrie <olivier@sobrie.be> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lan Tianyu authored
commit f3ca4164 upstream. acpi_processor_set_throttling() uses set_cpus_allowed_ptr() to make sure that the (struct acpi_processor)->acpi_processor_set_throttling() callback will run on the right CPU. However, the function may be called from a worker thread already bound to a different CPU in which case that won't work. Make acpi_processor_set_throttling() use work_on_cpu() as appropriate instead of abusing set_cpus_allowed_ptr(). Reported-and-tested-by: Jiri Olsa <jolsa@redhat.com> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com> [rjw: Changelog] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
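A sketch of the work_on_cpu() usage pattern described (the callback and argument struct are illustrative): run the CPU-specific part on the target CPU instead of rebinding the current task with set_cpus_allowed_ptr().

    #include <linux/workqueue.h>

    struct example_throttle_arg {
            int state;              /* target throttling state */
    };

    /* Runs on the target CPU via work_on_cpu(). */
    static long example_do_throttle(void *data)
    {
            struct example_throttle_arg *arg = data;

            /* ... program the per-CPU throttling state 'arg->state' ... */
            return 0;
    }

    static int example_set_throttling(unsigned int cpu, int state)
    {
            struct example_throttle_arg arg = { .state = state };

            /* Queues the function on the per-CPU workqueue of 'cpu' and waits
             * for it, so the caller's own CPU affinity is never touched. */
            return work_on_cpu(cpu, example_do_throttle, &arg);
    }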
-
Hans de Goede authored
commit bd8ba205 upstream. Some devices have duplicate entries in their brightness levels table, ie on my Dell Latitude E6430 the table looks like this:

    [ 3.686060] acpi backlight index 0, val 80
    [ 3.686095] acpi backlight index 1, val 50
    [ 3.686122] acpi backlight index 2, val 5
    [ 3.686147] acpi backlight index 3, val 5
    [ 3.686172] acpi backlight index 4, val 5
    [ 3.686197] acpi backlight index 5, val 5
    [ 3.686223] acpi backlight index 6, val 5
    [ 3.686248] acpi backlight index 7, val 5
    [ 3.686273] acpi backlight index 8, val 6
    [ 3.686332] acpi backlight index 9, val 7
    [ 3.686356] acpi backlight index 10, val 8
    [ 3.686380] acpi backlight index 11, val 9
    etc.

Notice that brightness values 0-5 are all mapped to 5. This means that if userspace writes any value between 0 and 5 to the brightness sysfs attribute and then reads it, it will always return 0, which is somewhat unexpected. This is a problem for e.g. gnome-settings-daemon, which uses read-modify-write logic when the user presses the brightness up or down keys. This is done this way to take brightness changes from other sources into account. On this specific laptop what happens once the brightness has been set to 0, is that gsd reads 0, adds 5, writes 5, and on the next brightness up key press again reads 0, so things get stuck at the lowest brightness setting. Filtering out the duplicate table entries makes any write to brightness read back as the written value as one would expect, fixing this. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Reviewed-by: Aaron Lu <aaron.lu@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jean Delvare authored
commit c0f5eeed upstream. The reference count changes done by pci_get_device can be a little misleading when the usage diverges from the most common scheme. The reference count of the device passed as the last parameter is always decreased, even if the function returns no new device. So if we are going to try alternative device IDs, we must manually increment the device reference count before each retry. If we don't, we end up decreasing the reference count, and after a few modprobe/rmmod cycles the PCI devices will vanish. In other words and as Alan put it: without this fix the EDAC code corrupts the PCI device list. This fixes kernel bug #50491: https://bugzilla.kernel.org/show_bug.cgi?id=50491 Signed-off-by: Jean Delvare <jdelvare@suse.de> Link: http://lkml.kernel.org/r/20140224093927.7659dd9d@endymion.delvare Reviewed-by: Alan Cox <alan@linux.intel.com> Cc: Mauro Carvalho Chehab <m.chehab@samsung.com> Cc: Doug Thompson <dougthompson@xmission.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
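A sketch of the retry pattern described (device IDs are illustrative): because pci_get_device() always drops the reference on the device it is handed, a reference must be re-taken before each retry with an alternative ID.

    #include <linux/pci.h>

    #define EXAMPLE_VENDOR     0x8086       /* illustrative IDs only */
    #define EXAMPLE_DEV_FIRST  0x1234
    #define EXAMPLE_DEV_ALT    0x5678

    static struct pci_dev *example_find_bridge(struct pci_dev *prev)
    {
            struct pci_dev *pdev;

            pdev = pci_get_device(EXAMPLE_VENDOR, EXAMPLE_DEV_FIRST, prev);
            if (pdev)
                    return pdev;

            /* The first call consumed the reference on 'prev'; take a new one
             * before handing it to pci_get_device() again, or 'prev' ends up
             * over-put and eventually vanishes from the device list. */
            pci_dev_get(prev);
            return pci_get_device(EXAMPLE_VENDOR, EXAMPLE_DEV_ALT, prev);
    }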
-
Tomasz Nowicki authored
commit b685f3b1 upstream. acpi_pci_link_allocate_irq() can return negative gsi even if entry != NULL. For that case we have a memory leak, so free entry before returning from acpi_pci_irq_enable() for gsi < 0. Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org> [rjw: Subject and changelog] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bjorn Helgaas authored
commit 1f42db78 upstream. Some firmware leaves the Interrupt Disable bit set even if the device uses INTx interrupts. Clear Interrupt Disable so we get those interrupts. Based on the report mentioned below, if the user selects the "EHCI only" option in the Intel Baytrail BIOS, the EHCI device is handed off to the OS with the PCI_COMMAND_INTX_DISABLE bit set. Link: http://lkml.kernel.org/r/20140114181721.GC12126@xanatos Link: https://bugzilla.kernel.org/show_bug.cgi?id=70601Reported-by: Chris Cheng <chris.cheng@atrustcorp.com> Reported-and-tested-by: Jamie Chen <jamie.chen@intel.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> CC: Sarah Sharp <sarah.a.sharp@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
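A sketch of the fix-up described (generic, not the exact PCI quirk code): read PCI_COMMAND and clear the Interrupt Disable bit if firmware left it set.

    #include <linux/pci.h>

    static void example_enable_intx(struct pci_dev *dev)
    {
            u16 cmd;

            pci_read_config_word(dev, PCI_COMMAND, &cmd);
            if (cmd & PCI_COMMAND_INTX_DISABLE) {
                    /* Firmware left INTx masked; unmask so legacy interrupts work. */
                    cmd &= ~PCI_COMMAND_INTX_DISABLE;
                    pci_write_config_word(dev, PCI_COMMAND, cmd);
            }
    }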
-
Srivatsa S. Bhat authored
commit c3274763 upstream. The powernow-k8 driver maintains a per-cpu data-structure called powernow_data that is used to perform the frequency transitions. It initializes this data structure only for the policy->cpu. So, accesses to this data structure by other CPUs result in various problems because they would have been uninitialized. Specifically, if a cpu (!= policy->cpu) invokes the driver's ->get() function, it returns 0 as the KHz value, since its per-cpu memory doesn't point to anything valid. This causes problems during suspend/resume since cpufreq_update_policy() tries to enforce this (0 KHz) as the current frequency of the CPU, and this madness gets propagated to adjust_jiffies() as well. Eventually, lots of things start breaking down, including the r8169 ethernet card, in one particularly interesting case reported by Pierre Ossman. Fix this by initializing the per-cpu data-structures of all the CPUs in the policy appropriately. References: https://bugzilla.kernel.org/show_bug.cgi?id=70311 Reported-by: Pierre Ossman <pierre@ossman.eu> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
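A sketch of the per-CPU initialization described (structure names are illustrative, not the powernow-k8 code): point every CPU in the policy at the shared driver data, not just policy->cpu.

    #include <linux/cpufreq.h>
    #include <linux/cpumask.h>
    #include <linux/percpu.h>

    struct example_data {
            unsigned int cur_khz;   /* per-policy driver state */
    };

    static DEFINE_PER_CPU(struct example_data *, example_data_ptr);

    static void example_init_policy(struct cpufreq_policy *policy,
                                    struct example_data *data)
    {
            unsigned int cpu;

            /* Every CPU governed by this policy may call ->get(); each needs
             * a valid pointer, not only policy->cpu. */
            for_each_cpu(cpu, policy->cpus)
                    per_cpu(example_data_ptr, cpu) = data;
    }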
-
Tejun Heo authored
commit 9f9c47f0 upstream. It's a bit odd to see a newer device showing mod15write; however, the reported behavior is highly consistent and other factors which could contribute seem to have been verified well enough. Also, both sata_sil itself and the drive are fairly outdated at this point making the risk of this change fairly low. It is possible, probably likely, that other drive models in the same family have the same problem; however, for now, let's just add the specific model which was tested. Signed-off-by: Tejun Heo <tj@kernel.org> Reported-by: matson <lists-matsonpa@luxsci.me> References: http://lkml.kernel.org/g/201401211912.s0LJCk7F015058@rs103.luxsci.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Denis V. Lunev authored
commit efb9e0f4 upstream. Without the patch the kernel generates the following error. ata11.15: SATA link up 1.5 Gbps (SStatus 113 SControl 310) ata11.15: Port Multiplier vendor mismatch '0x197b' != '0x123' ata11.15: PMP revalidation failed (errno=-19) ata11.15: failed to recover PMP after 5 tries, giving up This patch helps to bypass this error and the device becomes functional. Signed-off-by: Denis V. Lunev <den@openvz.org> Signed-off-by: Tejun Heo <tj@kernel.org> Cc: <linux-ide@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Peter Zijlstra authored
commit 26e61e89 upstream. Vince "Super Tester" Weaver reported a new round of syscall fuzzing (Trinity) failures, with perf WARN_ON()s triggering. He also provided traces of the failures. This is I think the relevant bit: > pec_1076_warn-2804 [000] d... 147.926153: x86_pmu_disable: x86_pmu_disable > pec_1076_warn-2804 [000] d... 147.926153: x86_pmu_state: Events: { > pec_1076_warn-2804 [000] d... 147.926156: x86_pmu_state: 0: state: .R config: ffffffffffffffff ( (null)) > pec_1076_warn-2804 [000] d... 147.926158: x86_pmu_state: 33: state: AR config: 0 (ffff88011ac99800) > pec_1076_warn-2804 [000] d... 147.926159: x86_pmu_state: } > pec_1076_warn-2804 [000] d... 147.926160: x86_pmu_state: n_events: 1, n_added: 0, n_txn: 1 > pec_1076_warn-2804 [000] d... 147.926161: x86_pmu_state: Assignment: { > pec_1076_warn-2804 [000] d... 147.926162: x86_pmu_state: 0->33 tag: 1 config: 0 (ffff88011ac99800) > pec_1076_warn-2804 [000] d... 147.926163: x86_pmu_state: } > pec_1076_warn-2804 [000] d... 147.926166: collect_events: Adding event: 1 (ffff880119ec8800) So we add the insn:p event (fd[23]). At this point we should have: n_events = 2, n_added = 1, n_txn = 1 > pec_1076_warn-2804 [000] d... 147.926170: collect_events: Adding event: 0 (ffff8800c9e01800) > pec_1076_warn-2804 [000] d... 147.926172: collect_events: Adding event: 4 (ffff8800cbab2c00) We try and add the {BP,cycles,br_insn} group (fd[3], fd[4], fd[15]). These events are 0:cycles and 4:br_insn, the BP event isn't x86_pmu so that's not visible. group_sched_in() pmu->start_txn() /* nop - BP pmu */ event_sched_in() event->pmu->add() So here we should end up with: 0: n_events = 3, n_added = 2, n_txn = 2 4: n_events = 4, n_added = 3, n_txn = 3 But seeing the below state on x86_pmu_enable(), the must have failed, because the 0 and 4 events aren't there anymore. Looking at group_sched_in(), since the BP is the leader, its event_sched_in() must have succeeded, for otherwise we would not have seen the sibling adds. But since neither 0 or 4 are in the below state; their event_sched_in() must have failed; but I don't see why, the complete state: 0,0,1:p,4 fits perfectly fine on a core2. However, since we try and schedule 4 it means the 0 event must have succeeded! Therefore the 4 event must have failed, its failure will have put group_sched_in() into the fail path, which will call: event_sched_out() event->pmu->del() on 0 and the BP event. Now x86_pmu_del() will reduce n_events; but it will not reduce n_added; giving what we see below: n_event = 2, n_added = 2, n_txn = 2 > pec_1076_warn-2804 [000] d... 147.926177: x86_pmu_enable: x86_pmu_enable > pec_1076_warn-2804 [000] d... 147.926177: x86_pmu_state: Events: { > pec_1076_warn-2804 [000] d... 147.926179: x86_pmu_state: 0: state: .R config: ffffffffffffffff ( (null)) > pec_1076_warn-2804 [000] d... 147.926181: x86_pmu_state: 33: state: AR config: 0 (ffff88011ac99800) > pec_1076_warn-2804 [000] d... 147.926182: x86_pmu_state: } > pec_1076_warn-2804 [000] d... 147.926184: x86_pmu_state: n_events: 2, n_added: 2, n_txn: 2 > pec_1076_warn-2804 [000] d... 147.926184: x86_pmu_state: Assignment: { > pec_1076_warn-2804 [000] d... 147.926186: x86_pmu_state: 0->33 tag: 1 config: 0 (ffff88011ac99800) > pec_1076_warn-2804 [000] d... 147.926188: x86_pmu_state: 1->0 tag: 1 config: 1 (ffff880119ec8800) > pec_1076_warn-2804 [000] d... 147.926188: x86_pmu_state: } > pec_1076_warn-2804 [000] d... 
147.926190: x86_pmu_enable: S0: hwc->idx: 33, hwc->last_cpu: 0, hwc->last_tag: 1 hwc->state: 0 So the problem is that x86_pmu_del(), when called from a group_sched_in() that fails (for whatever reason), and without x86_pmu TXN support (because the leader is !x86_pmu), will corrupt the n_added state. Reported-and-Tested-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Stephane Eranian <eranian@google.com> Cc: Dave Jones <davej@redhat.com> Link: http://lkml.kernel.org/r/20140221150312.GF3104@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Marek Szyprowski authored
commit c091c71a upstream. GFP_ATOMIC is not a single gfp flag, but a macro which expands to other flags; what is meaningful here is the LACK of the __GFP_WAIT flag. To check whether the caller wants to perform an atomic allocation, the code must test for the absence of the __GFP_WAIT flag. This patch fixes the issue introduced in v3.5-rc1. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
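The gfp-flag test at issue, as a short sketch (using the __GFP_WAIT name of kernels of this era): GFP_ATOMIC expands to several bits, so comparing against it directly is wrong; what matters is whether __GFP_WAIT is absent, i.e. whether the caller may sleep.

    #include <linux/gfp.h>
    #include <linux/types.h>

    /* Returns true when the caller must not sleep (atomic context). */
    static bool example_is_atomic_alloc(gfp_t gfp)
    {
            /* Wrong test: 'gfp == GFP_ATOMIC' -- GFP_ATOMIC is a combination
             * of bits, and other non-sleeping flag combinations exist too.
             * The meaningful property is the absence of __GFP_WAIT. */
            return !(gfp & __GFP_WAIT);
    }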
-
Levente Kurusa authored
commit 67809f85 upstream. Samsung's pci-e SSDs with device ID 0x1600, which are found on some macbooks, time out on NCQ commands. Blacklist NCQ on the device so that the affected machines can at least boot. Original-patch-by: Levente Kurusa <levex@linux.com> Signed-off-by: Tejun Heo <tj@kernel.org> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=60731 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Laurent Dufour authored
commit f5295bd8 upstream. In copy_oldmem_page, the current check using max_pfn and min_low_pfn to decide if the page is backed or not is not valid when the memory layout is not contiguous. This happens when running as a QEMU/KVM guest, where RTAS is mapped higher in memory. In that case max_pfn points to the end of RTAS, and a hole between the end of the kdump kernel and RTAS is not backed by PTEs. As a consequence, the kdump kernel crashes in copy_oldmem_page when directly accessing the pages in that hole. This fix relies on memblock's memblock_is_region_memory() to check whether the page being read is part of the directly accessible memory. Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Tested-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tony Breeds authored
commit 41dd03a9 upstream. Currently we're storing a host endian RTAS token in rtas_stop_self_args.token. We then pass that directly to rtas. This is fine on big endian; however, on little endian the token is not what we expect. This will typically result in hitting: panic("Alas, I survived.\n"); To fix this we always use the stop-self token in host order and always convert it to be32 before passing this to rtas. Signed-off-by: Tony Breeds <tony@bakeyournoodle.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
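A sketch of the endianness handling described (the argument structure is simplified and illustrative): keep the token in host byte order everywhere in the kernel and convert with cpu_to_be32() only at the point it is handed to RTAS.

    #include <asm/byteorder.h>
    #include <linux/types.h>

    struct example_stop_self_args {
            __be32 token;           /* as RTAS expects it: big-endian */
            /* ... */
    };

    static void example_prepare_stop_self(struct example_stop_self_args *args,
                                          u32 host_token)
    {
            /* Look up and store the token in host order elsewhere; convert to
             * big-endian only when building the RTAS argument block. */
            args->token = cpu_to_be32(host_token);
    }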
-