- 05 Mar, 2014 40 commits
-
Alex Deucher authored
commit d7eb0a09 upstream. Properly clear the enable bit when audio disable is requested. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Mike Snitzer authored
commit 1acacc07 upstream. dm_pool_close_thin_device() must be called if dm_set_target_max_io_len() fails in thin_ctr(). Otherwise __pool_destroy() will fail because the pool still has an open thin device:
  device-mapper: thin metadata: attempt to close pmd when 1 device(s) are still open
  device-mapper: thin: __pool_destroy: dm_pool_metadata_close() failed.
Also, an error code must be set when thin_ctr() fails because the pool is in fail_io mode. Signed-off-by: Mike Snitzer <snitzer@redhat.com> Acked-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
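A rough sketch of the corrected error path; only the two function names above are taken from the commit text, while the locals and label are illustrative:
    /* thin_ctr(), after dm_pool_open_thin_device() succeeded (sketch) */
    r = dm_set_target_max_io_len(ti, tc->pool->sectors_per_block);
    if (r)
            goto bad;       /* previously returned without closing the thin device */
    ...
    bad:
            dm_pool_close_thin_device(tc->td);      /* balance the open so __pool_destroy() can close the pmd */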
-
Mike Snitzer authored
commit 4d1662a3 upstream. Commit 905e51b3 ("dm thin: commit outstanding data every second") introduced a periodic commit. This commit occurs regardless of whether any thin devices have made changes. Fix the periodic commit to check if any of a pool's thin devices have changed using dm_pool_changed_this_transaction(). Reported-by: Alexander Larsson <alexl@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Acked-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
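The idea, as a hedged sketch; dm_pool_changed_this_transaction() is named above, the surrounding function and helper names are illustrative:
    /* periodic-commit path, sketch */
    if (!dm_pool_changed_this_transaction(pool->pmd))
            return;         /* no thin device wrote anything this transaction: skip the commit */
    commit(pool);           /* illustrative name for the pool's commit helper */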
-
Hannes Reinecke authored
commit a1989b33 upstream. An invalid ioctl will never be valid, irrespective of whether multipath has active paths or not. So for invalid ioctls we do not have to wait for multipath to activate any paths, but can rather return an error code immediately. This fix resolves numerous instances of: udevd[]: worker [] unexpectedly returned with status 0x0100 that have been seen during testing. Signed-off-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
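The pattern, roughly sketched: reject an unsupported ioctl immediately instead of waiting for a usable path, since it can never succeed later either. The validity check below is illustrative, not the exact upstream diff:
    /* multipath_ioctl(), sketch */
    if (!ioctl_is_supported(cmd))           /* illustrative helper */
            return -ENOTTY;                 /* invalid now, invalid later: don't wait for path activation */
    /* only valid commands fall through to the path-selection / queue_if_no_path logic */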
-
Linus Walleij authored
commit e9baa9d9 upstream. It appears that in the DMA40 driver the DMA tasklet will very often dereference memory for a descriptor just freed from the DMA40 slab. Nothing happens because no other part of the driver has yet had a chance to claim this memory, but dereferencing freed memory is really nasty, so let's check the flag before the descriptor is freed and store it in a bool variable. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
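Shape of the fix, sketched with illustrative names: latch the flag while the descriptor is still valid, free it, then act only on the local copy:
    bool cyclic = d40d->cyclic;             /* illustrative flag; read before the free */
    d40_desc_free(d40c, d40d);              /* descriptor memory returns to the DMA40 slab here */
    if (!cyclic)                            /* never touch d40d after the free */
            ...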
-
Sebastian Capella authored
commit f8d5b9e9 upstream. During restore, the pm_notifier chain is called with PM_RESTORE_PREPARE. The firmware_class driver handler fw_pm_notify does not have a handler for this event, so it keeps a reader on kmod.c's umhelper_sem. During freeze_processes, the call to __usermodehelper_disable tries to take a write lock on this semaphore and hangs waiting. Signed-off-by: Sebastian Capella <sebastian.capella@linaro.org> Acked-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
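Sketch of the missing case in fw_pm_notify(); the per-case actions are elided because only the event-handling structure matters here:
    switch (mode) {
    case PM_HIBERNATION_PREPARE:
    case PM_SUSPEND_PREPARE:
    case PM_RESTORE_PREPARE:        /* the event that was previously unhandled */
            /* cancel pending firmware requests so the umhelper read lock is released */
            break;
    case PM_POST_SUSPEND:
    case PM_POST_HIBERNATION:
    case PM_POST_RESTORE:
            ...
            break;
    }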
-
Jean Delvare authored
commit 75135da0 upstream. pci_get_device() decrements the reference count of "from" (last argument) so when we break off the loop successfully we have only one device reference - and we don't know which device we have. If we want a reference to each device, we must take them explicitly and let the pci_get_device() walk complete to avoid duplicate references. This is serious, as over-putting device references will cause the device to eventually disappear. Without this fix, the kernel crashes after a few insmod/rmmod cycles. Tested on an Intel S7000FC4UR system with a 7300 chipset. Signed-off-by: Jean Delvare <jdelvare@suse.de> Link: http://lkml.kernel.org/r/20140224111656.09bbb7ed@endymion.delvare Cc: Mauro Carvalho Chehab <m.chehab@samsung.com> Cc: Doug Thompson <dougthompson@xmission.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
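The safe pattern, sketched: take an explicit reference on every match with pci_dev_get() and let pci_get_device() run to the end of the walk, instead of breaking out with a single implicit reference. The predicate and storage names are illustrative:
    struct pci_dev *pdev = NULL;

    while ((pdev = pci_get_device(PCI_VENDOR_ID_INTEL, dev_id, pdev))) {
            if (device_wanted(pdev))                        /* illustrative predicate */
                    pvt->pdev[idx++] = pci_dev_get(pdev);   /* one explicit reference per kept device */
            /* no break: the final pci_get_device() call drops its own reference on 'pdev' */
    }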
-
Dr. Greg Wettstein authored
commit 6f58c780 upstream. A selective retransmission request (SRR) is a fibre-channel protocol control request which provides support for requesting retransmission of a data sequence in response to an issue such as frame loss or corruption. These events are experienced infrequently in fibre-channel based networks which makes it difficult to test and assess codepaths which handle these events. We were fortunate enough, for some definition of fortunate, to have a metro-area single-mode SAN link which, at 10 GBPS sustained load levels, would consistently generate SRR's in a SCST based target implementation using our SCST/in-kernel Qlogic target interface driver. In response to an SRR the in-kernel Qlogic target driver immediately panics resulting in a catastrophic storage failure for serviced initiators. The culprit was a debug statement in the qla_target.c file which does not verify that a pointer to the SCSI CDB is not null. The unchecked pointer dereference results in the kernel panic and resultant system failure. The other two references to the SCSI CDB by the SRR handling code use a ternary operator to verify a non-null pointer is being acted on. This patch simply adds a similar test to the implicated debug statement. This patch is a candidate for any stable kernel being maintained since it addresses a potentially catastrophic event with minimal downside. Signed-off-by: Dr. Greg Wettstein <greg@enjellic.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
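The shape of the fix, sketched with illustrative field names: guard the CDB pointer in the debug print with the same ternary the neighbouring SRR code already uses:
    pr_debug("SRR cmd %p, cdb[0]=%x\n", cmd,
             cmd->cdb ? cmd->cdb[0] : 0);   /* a NULL CDB must not be dereferenced */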
-
Olof Johansson authored
commit e306dfd0 upstream. The frame PC value in the unwind code used to just take the saved LR value and use that. That's incorrect as a stack trace, since it shows the return path stack, not the call path stack. In particular, it shows faulty information in case the bl is done as the very last instruction of one label, since the return point will be in the next label. That can easily be seen with tail calls to panic(), which is marked __noreturn and thus doesn't have anything useful after it. Easiest here is to just correct the unwind code and do a -4, to get the actual call site for the backtrace instead of the return site. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
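The one-line idea, sketched (names are illustrative, not the actual struct stackframe layout): report the call site rather than the return address by stepping back one 4-byte AArch64 instruction:
    frame->pc = saved_lr - 4;       /* saved LR points after the bl; -4 lands on the bl itself */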
-
James Hogan authored
commit f229006e upstream. Fix irq_set_affinity callbacks in the Meta IRQ chip drivers to AND cpu_online_mask into the cpumask when picking a CPU to vector the interrupt to. As Thomas pointed out, the /proc/irq/$N/smp_affinity interface doesn't filter out offline CPUs, so without this patch if you offline CPU0 and set an IRQ affinity to 0x3 it vectors the interrupt onto CPU0 even though it is offline. Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-metag@vger.kernel.org Signed-off-by: Jiri Slaby <jslaby@suse.cz>
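A minimal sketch of the fix inside an irq_set_affinity callback (the callback name is illustrative): intersect the requested mask with cpu_online_mask before picking the target CPU:
    static int meta_irq_set_affinity(struct irq_data *data,
                                     const struct cpumask *cpumask, bool force)
    {
            unsigned int cpu = cpumask_any_and(cpumask, cpu_online_mask);
            /* cpu is now an online CPU (or >= nr_cpu_ids if the intersection is empty) */
            ...
            return 0;
    }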
-
Kirill A. Shutemov authored
commit 9845cbbd upstream. Masayoshi Mizuma reported a hang of an application running under the memcg limit. It happens on a write-protection fault to the huge zero page: if we successfully allocate a huge page to replace the zero page but hit the memcg limit, we need to split the zero page with split_huge_page_pmd() and fall back to small pages. The other part of the problem is that VM_FAULT_OOM has a special meaning in the do_huge_pmd_wp_page() context: __handle_mm_fault() expects the page to be split if it sees VM_FAULT_OOM and will retry page fault handling. This causes an infinite loop if the page was not split. do_huge_pmd_wp_zero_page_fallback() can return VM_FAULT_OOM if it failed to allocate one small page, so falling back to small pages will not help. The solution for this part is to replace VM_FAULT_OOM with VM_FAULT_FALLBACK when fallback is required. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reported-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com> Reviewed-by: Michal Hocko <mhocko@suse.cz> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Charles Keepax authored
commit c4204960 upstream. snd_soc_dapm_sync takes the dapm_mutex internally, but we currently take it externally as well. This patch fixes this. Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Davidlohr Bueso authored
commit f3713fd9 upstream. Commit 93e6f119 ("ipc/mqueue: cleanup definition names and locations") added global hardcoded limits to the amount of message queues that can be created. While these limits are per-namespace, in reality they end up breaking userspace applications. Historically users have, at least in theory, been able to create up to INT_MAX queues, and limiting it to just 1024 is way too low and dramatic for some workloads and use cases. For instance, Madars reports: "This update imposes bad limits on our multi-process application. As our app uses approaches that each process opens its own set of queues (usually something about 3-5 queues per process). In some scenarios we might run up to 3000 processes or more (which of-course for linux is not a problem). Thus we might need up to 9000 queues or more. All processes run under one user." Other affected users can be found in launchpad bug #1155695: https://bugs.launchpad.net/ubuntu/+source/manpages/+bug/1155695 Instead of increasing this limit, revert it entirely and fall back to the original way of dealing with queue limits -- where once a user's resource limit is reached, and all memory is used, new queues cannot be created. Signed-off-by: Davidlohr Bueso <davidlohr@hp.com> Reported-by: Madars Vitolins <m@silodev.com> Acked-by: Doug Ledford <dledford@redhat.com> Cc: Manfred Spraul <manfred@colorfullife.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Jan Kara authored
commit 1362f4ea upstream. Currently the last dqput() can race with dquot_scan_active(), causing it to call the callback for an already deactivated dquot. The race is as follows:
CPU1                                            CPU2
dqput()
  spin_lock(&dq_list_lock);
  if (atomic_read(&dquot->dq_count) > 1) {
    - not taken
  if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
    spin_unlock(&dq_list_lock);
    ->release_dquot(dquot);
    if (atomic_read(&dquot->dq_count) > 1)
      - not taken
                                                dquot_scan_active()
                                                  spin_lock(&dq_list_lock);
                                                  if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
                                                    - not taken
                                                  atomic_inc(&dquot->dq_count);
                                                  spin_unlock(&dq_list_lock);
    - proceeds to release dquot
                                                  ret = fn(dquot, priv);
                                                    - called for inactive dquot
Fix the problem by making sure a possible ->release_dquot() is finished by the time we call the callback, and that new calls to it will notice the reference dquot_scan_active() has taken and bail out. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Dan Williams authored
commit da87ca4d upstream. Since commit 77873803 "net_dma: mark broken" we no longer pin dma engines active for the network-receive-offload use case. As a result the ->free_chan_resources() that occurs after the driver self test no longer has a NET_DMA induced ->alloc_chan_resources() to back it up. A late firing irq can lead to ksoftirqd spinning indefinitely due to the tasklet_disable() performed by ->free_chan_resources(). Only ->alloc_chan_resources() can clear this condition in affected kernels. This problem has been present since commit 3e037454 "I/OAT: Add support for MSI and MSI-X" in 2.6.24, but is now exposed. Given the NET_DMA use case is deprecated we can revisit moving the driver to use threaded irqs. For now, just tear down the irq and tasklet properly by: 1/ Disable the irq from triggering the tasklet 2/ Disable the irq from re-arming 3/ Flush inflight interrupts 4/ Flush the timer 5/ Flush inflight tasklets References: https://lkml.org/lkml/2014/1/27/282 https://lkml.org/lkml/2014/2/19/672 Cc: Ingo Molnar <mingo@elte.hu> Cc: Steven Rostedt <rostedt@goodmis.org> Reported-by: Mike Galbraith <bitbucket@online.de> Reported-by: Stanislav Fomichev <stfomichev@yandex-team.ru> Tested-by: Mike Galbraith <bitbucket@online.de> Tested-by: Stanislav Fomichev <stfomichev@yandex-team.ru> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
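A hedged sketch of the five-step teardown using standard kernel primitives; the device-specific interrupt-masking steps are only comments, and the field names are illustrative:
    /* 1/ clear the driver state that lets the irq handler schedule the tasklet */
    /* 2/ mask the interrupt source in the device so it cannot re-arm (device-specific write) */
    synchronize_irq(pdev->irq);             /* 3/ wait out any handler already in flight */
    del_timer_sync(&ioat_chan->timer);      /* 4/ flush the channel timer */
    tasklet_kill(&ioat_chan->cleanup_task); /* 5/ flush any tasklet already scheduled */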
-
Eric Paris authored
commit 9085a642 upstream. When writing policy via /sys/fs/selinux/policy I wrote the type and class of filename trans rules in CPU endianness instead of little endian. On x86_64 this works just fine, but it means that on big-endian arches like ppc64 and s390 userspace reads the policy and converts it using le32_to_cpu, so the values are all screwed up. Write the values in little-endian format as they should have been from the start. Signed-off-by: Eric Paris <eparis@redhat.com> Acked-by: Stephen Smalley <sds@tycho.nsa.gov> Signed-off-by: Paul Moore <pmoore@redhat.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
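The pattern of the fix, sketched with illustrative buffer and field names (put_entry() as the policydb writer helper is an assumption):
    buf[0] = cpu_to_le32(datum->stype);     /* previously written in CPU endianness */
    buf[1] = cpu_to_le32(datum->tclass);
    rc = put_entry(buf, sizeof(u32), 2, fp);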
-
Max Filippov authored
commit e2fd1374 upstream. Most in-kernel users want registers spilled on the kernel stack and don't require PS.EXCM to be set. That means that they don't need the fixup routine and can reuse the regular window overflow mechanism, which makes the spill routine very simple. Suggested-by: Chris Zankel <chris@zankel.net> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Max Filippov authored
commit 3251f1e2 upstream. We need it saved because it contains a3, where we track which register windows we still need to spill, and the fixup handler may call C exception handlers. Also fix comments. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Andrew Lunn authored
commit d86e9af6 upstream. Enabling SPARSE_IRQ shows up a bug in the irq-orion bridge interrupt handler. The bridge interrupt is implemented using a single generic chip. Thus the parameter passed to irq_get_domain_generic_chip() should always be zero. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Fixes: 9dbd90f1 ("irqchip: Add support for Marvell Orion SoCs") Signed-off-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Sebastian Hesselbarth authored
commit e0318ec3 upstream. Bridge IRQ_CAUSE bits are asserted regardless of the corresponding bit in IRQ_MASK register. To avoid interrupt events on stale irqs, we have to clear them before unmask. This installs an .irq_startup callback to ensure stale irqs are cleared before initial unmask. Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Signed-off-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Sebastian Hesselbarth authored
commit 5f40067f upstream. Bridge irqs are edge-triggered, i.e. they get asserted on low-to-high transitions and not on the level of the downstream interrupt line. This replaces handle_level_irq by the more appropriate handle_edge_irq. Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Signed-off-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Sebastian Hesselbarth authored
commit 7b119fd1 upstream. It is good practice to mask and clear pending irqs on init. We already mask all irqs, so also clear the bridge irq cause register. Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Signed-off-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit 37c367ec upstream. HP Folio 13 may have a broken BIOS that doesn't set up the mute LED GPIO properly, and the driver guesses it wrongly, too. Add a new fixup entry for setting the GPIO pin statically for this laptop. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=70991 Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Peter Zijlstra authored
commit e3703f8c upstream. Drew Richardson reported that he could make the kernel go *boom* when hotplugging while having perf events active. It turned out that when you have a group event, the code in __perf_event_exit_context() fails to remove the group siblings from the context. We then proceed with destroying and freeing the event, and when you re-plug the CPU and try and add another event to that CPU, things go *boom* because you've still got dead entries there. Reported-by: Drew Richardson <drew.richardson@arm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/n/tip-k6v5wundvusvcseqj1si0oz0@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Will Deacon authored
commit 57ca90f6 upstream. Whilst trying to bring up an SMMUv2 implementation with the table walker plumbed into a coherent interconnect, I noticed that the memory transactions targeting the CPU caches from the SMMU were marked as outer-shareable instead of inner-shareable. After a bunch of digging, it seems that we actually need to program CBARn.BPSHCFG for s1-s2-bypass contexts to act as non-shareable in order for the shareability configured in the corresponding TTBCR not to be overridden with an outer-shareable attribute. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Will Deacon authored
commit c9d09e27 upstream. Commit a44a9791 ("iommu/arm-smmu: use mutex instead of spinlock for locking page tables") replaced the page table spinlock with a mutex, to allow blocking allocations to satisfy lazy mapping requests. Unfortunately, it turns out that IOMMU mappings are created from atomic context (e.g. spinlock held during a dma_map), so this change doesn't really help us in practice. This patch is a partial revert of the offending commit, bringing back the original spinlock but replacing our page table allocations for any levels below the pgd (which is allocated during domain init) with GFP_ATOMIC instead of GFP_KERNEL. Reported-by: Andreas Herrmann <andreas.herrmann@calxeda.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Yifan Zhang authored
commit 97a64420 upstream. The ARM SMMU driver's population of puds and pmds is broken, since we iterate over the next level of table repeatedly setting the current level descriptor to point at the pmd being initialised. This is clearly wrong when dealing with multiple pmds/puds. This patch fixes the problem by moving the pud/pmd population out of the loop and instead performing it when we allocate the next level (like we correctly do for ptes already). The starting address for the next level is then calculated prior to entering the loop. Signed-off-by: Yifan Zhang <zhangyf@marvell.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Denis CIOCCA authored
commit a0657716 upstream. The driver was not able to manage the sensor: during the probe function's WAI check, the driver stopped and printed "device name and WhoAmI mismatch." The correct WAI value of the L3GD20H is 0xd7 instead of 0xd4, so support for the sensor is dropped. Signed-off-by: Denis Ciocca <denis.ciocca@st.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Manu Gupta authored
commit 260ea9c2 upstream. The D-Link DWA-123 REV D1 with USB ID 2001:3310 uses this driver. Signed-off-by: Manu Gupta <manugupt1@gmail.com> Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Arve Hjønnevåg authored
commit e194fd8a upstream. The change (008fa749) that moved the node release code to a separate function broke death notifications in some cases. When it encountered a reference without a death notification request, it would skip looking at the remaining references, and therefore fail to send death notifications for them. Cc: Colin Cross <ccross@android.com> Cc: Android Kernel Team <kernel-team@android.com> Signed-off-by: Arve Hjønnevåg <arve@android.com> Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Jeremy Compostella <jeremy.compostella@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Steve Twiss authored
commit ebf6dad0 upstream. Bug fix to allow setting the maximum voltage for certain LDOs. What the bug is: there is a problem caused by an invalid calculation of n_voltages in the driver. This n_voltages value has the potential to be different for each regulator. The value for linear_min_sel is set as DA9063_V##regl_name#, which can differ depending upon the regulator. It is chosen according to the following definitions in the DA9063 registers.h file:
  DA9063_VLDO1_BIAS   0
  DA9063_VLDO2_BIAS   0
  DA9063_VLDO3_BIAS   0
  DA9063_VLDO4_BIAS   0
  DA9063_VLDO5_BIAS   2
  DA9063_VLDO6_BIAS   2
  DA9063_VLDO7_BIAS   2
  DA9063_VLDO8_BIAS   2
  DA9063_VLDO9_BIAS   3
  DA9063_VLDO10_BIAS  2
  DA9063_VLDO11_BIAS  2
The calculation of n_voltages is valid for LDOs whose BIAS value is zero, but it is not correct for those LDOs which have a non-zero value. What the fix is: in order to take into account the non-zero linear_min_sel value set for the regulators LDO5, LDO6, LDO7, LDO8, LDO9, LDO10 and LDO11, the calculation of n_voltages should include the missing term defined by DA9063_V##regl_name#. This in turn allows the core constraints calculation to set the maximum voltage limits correctly and therefore lets users apply the maximum expected voltage to all of the LDOs. Signed-off-by: Steve Twiss <stwiss.opensource@diasemi.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
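The corrected relationship, sketched: selectors below linear_min_sel are unusable but still count toward n_voltages, otherwise the top selectors (and with them the maximum voltage) become unreachable. The variable names below are illustrative:
    /* e.g. LDO9, whose BIAS (linear_min_sel) is 3 */
    .linear_min_sel = DA9063_VLDO9_BIAS,
    .n_voltages     = DA9063_VLDO9_BIAS + (max_uV - min_uV) / step_uV + 1,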
-
Lai Jiangshan authored
commit 5bdfff96 upstream. When a kworker should die, the kworker is notified through the WORKER_DIE flag instead of kthread_should_stop(). This, IIRC, is primarily to keep the test synchronized inside the worker_pool lock. WORKER_DIE is first set while holding pool->lock, the lock is dropped and kthread_stop() is called. Unfortunately, this means that there's a slight chance that the target kworker may see WORKER_DIE before kthread_stop() finishes, exit, and free the target task before or during kthread_stop(). Fix it by pinning the target task before setting WORKER_DIE and putting it after kthread_stop() is done. tj: Improved patch description and comment. Moved pinning above WORKER_DIE to better signify what it's protecting. Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
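Sketch of the ordering the fix establishes, using core task APIs; the locking detail is simplified:
    struct task_struct *task = worker->task;

    get_task_struct(task);                  /* pin the task_struct so it cannot be freed under us */
    spin_lock_irq(&pool->lock);
    worker->flags |= WORKER_DIE;            /* from here on the kworker may exit at any time */
    spin_unlock_irq(&pool->lock);
    kthread_stop(task);                     /* safe even if the kworker already exited */
    put_task_struct(task);                  /* drop the pin */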
-
Guenter Roeck authored
commit 500a9157 upstream. When trying to set the minimum temperature, the driver was erroneously writing the maximum temperature into the chip. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Chao Bi authored
commit accb884b upstream. In mei_cl_read_start(), if sending the flow control request fails, the function releases "cl->read_cb" but forgets to set the pointer to NULL, leaving "cl->read_cb" pointing at freed memory. The next time this client is operated on, e.g. in mei_release(), the driver can dereference and free this stale pointer again. Fixes: PANIC at kfree in mei_release():
[228781.826904] Call Trace:
[228781.829737]  [<c16249b8>] ? mei_cl_unlink+0x48/0xa0
[228781.835283]  [<c1624487>] mei_io_cb_free+0x17/0x30
[228781.840733]  [<c16265d8>] mei_release+0xa8/0x180
[228781.845989]  [<c135c610>] ? __fsnotify_parent+0xa0/0xf0
[228781.851925]  [<c1325a69>] __fput+0xd9/0x200
[228781.856696]  [<c1325b9d>] ____fput+0xd/0x10
[228781.861467]  [<c125cae1>] task_work_run+0x81/0xb0
[228781.866821]  [<c1242e53>] do_exit+0x283/0xa00
[228781.871786]  [<c1a82b36>] ? kprobe_flush_task+0x66/0xc0
[228781.877722]  [<c124eeb8>] ? __dequeue_signal+0x18/0x1a0
[228781.883657]  [<c124f072>] ? dequeue_signal+0x32/0x190
[228781.889397]  [<c1243744>] do_group_exit+0x34/0xa0
[228781.894750]  [<c12517b6>] get_signal_to_deliver+0x206/0x610
[228781.901075]  [<c12018d8>] do_signal+0x38/0x100
[228781.906136]  [<c1626d1c>] ? mei_read+0x42c/0x4e0
[228781.911393]  [<c12600a0>] ? wake_up_bit+0x30/0x30
[228781.916745]  [<c16268f0>] ? mei_poll+0x120/0x120
[228781.922001]  [<c1324be9>] ? vfs_read+0x89/0x160
[228781.927158]  [<c16268f0>] ? mei_poll+0x120/0x120
[228781.932414]  [<c133ca34>] ? fget_light+0x44/0xe0
[228781.937670]  [<c1324e58>] ? SyS_read+0x68/0x80
[228781.942730]  [<c12019f5>] do_notify_resume+0x55/0x70
[228781.948376]  [<c1a7de5d>] work_notifysig+0x29/0x30
[228781.953827]  [<c1a70000>] ? bad_area+0x5/0x3e
Signed-off-by: Chao Bi <chao.bi@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
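The shape of the fix, sketched; only the failing branch of mei_cl_read_start() is shown and the surrounding code is elided:
    /* sending the flow control request failed */
    rets = -ENODEV;
    mei_io_cb_free(cb);     /* free the read callback... */
    cl->read_cb = NULL;     /* ...and clear the dangling pointer so mei_release() cannot kfree() it again */
    goto out;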
-
Joerg Dorchain authored
commit 6dbd46c8 upstream. Hello, the following patch adds an entry for the PID of a Cressi Leonardo diving computer interface to kernel 3.13.0. It is detected as FT232RL. Works with subsurface. Signed-off-by: Joerg Dorchain <joerg@dorchain.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Stanislaw Gruszka authored
commit a1227f3c upstream. ehci_irq() and ehci_hrtimer_func() can deadlock on ehci->lock when threadirqs option is used. To prevent the deadlock use spin_lock_irqsave() in ehci_irq(). This change can be reverted when hrtimer callbacks become threaded. Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
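The change in pattern, sketched: with forced-threaded interrupts (threadirqs) the handler no longer runs with local interrupts disabled, so it must use the irqsave variant of the lock also taken by the hrtimer callback:
    static irqreturn_t ehci_irq(struct usb_hcd *hcd)
    {
            struct ehci_hcd *ehci = hcd_to_ehci(hcd);
            unsigned long flags;

            spin_lock_irqsave(&ehci->lock, flags);  /* was spin_lock(); deadlock-prone with threadirqs */
            ...
            spin_unlock_irqrestore(&ehci->lock, flags);
            return IRQ_HANDLED;
    }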
-
Alan Stern authored
commit 3e8d6d85 upstream. High-speed USB connections revert back to full-speed signalling when the device goes into suspend. This takes several milliseconds, and during that time it's not possible to tell reliably whether the device has been disconnected. On some platforms, the Wake-On-Disconnect circuitry gets confused during this intermediate state. It generates a false wakeup signal, which can prevent the controller from going to sleep. To avoid this problem, this patch adds a 5-ms delay to the ehci_bus_suspend() routine if any ports have to switch over to full-speed signalling. (Actually, the delay was already present for devices using a particular kind of PHY power management; the patch merely causes the delay to be used more widely.) Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reviewed-by: Peter Chen <Peter.Chen@freescale.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Aleksander Morgado authored
commit 12df84d4 upstream. This interface is to be handled by the qmi_wwan driver. CC: Hans-Christoph Schemmel <hans-christoph.schemmel@gemalto.com> CC: Christian Schmiedl <christian.schmiedl@gemalto.com> CC: Nicolaus Colberg <nicolaus.colberg@gemalto.com> CC: David McCullough <david.mccullough@accelecon.com> Signed-off-by: Aleksander Morgado <aleksander@aleksander.es> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Florian Fainelli authored
commit 2d1f7af3 upstream. Commit 3dc6475c ("bcm63xx_enet: add support Broadcom BCM6345 Ethernet") changed the ENETDMA[CS] macros such that they are no longer macros, but actual register offset definitions. The bcm63xx_udc driver was not updated, and as a result the following build error pops up:
  CC      drivers/usb/gadget/u_ether.o
  drivers/usb/gadget/bcm63xx_udc.c: In function 'iudma_write':
  drivers/usb/gadget/bcm63xx_udc.c:642:24: error: called object '0' is not a function
  drivers/usb/gadget/bcm63xx_udc.c: In function 'iudma_reset_channel':
  drivers/usb/gadget/bcm63xx_udc.c:698:46: error: called object '0' is not a function
  drivers/usb/gadget/bcm63xx_udc.c:700:49: error: called object '0' is not a function
Fix this by updating usb_dmac_{read,write}l and usb_dmas_{read,write}l to take an extra channel argument, and use the channel width (ENETDMA_CHAN_WIDTH) to offset the register we want to access, hence doing again what the macro implicitly did for us. Cc: Kevin Cernekee <cernekee@gmail.com> Cc: Jonas Gorski <jogo@openwrt.org> Signed-off-by: Florian Fainelli <florian@openwrt.org> Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Matthieu CASTET authored
commit 5bf5dbed upstream. The ENDPTFLUSH and ENDPTPRIME registers are set by software and cleared by hardware, with one bit per endpoint. When setting a bit for one endpoint we must make sure we do not touch the other endpoints' bits. There is a race condition if the hardware clears a bit between the read and the write in hw_write. Signed-off-by: Peter Chen <peter.chen@freescale.com> Signed-off-by: Matthieu CASTET <matthieu.castet@parrot.com> Tested-by: Michael Grzeschik <mgrzeschik@pengutronix.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
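The underlying rule, sketched: for a set-by-software/clear-by-hardware register, never read-modify-write, because the hardware can clear another endpoint's bit between the read and the write; write only the bit you own (bits written as 0 are ignored by such a register). The hw_write() helper follows the chipidea driver, but treat the exact call details as an assumption:
    /* racy: a masked update reads the other endpoints' bits back before writing */
    hw_write(ci, OP_ENDPTPRIME, BIT(n), BIT(n));

    /* safe: write only our endpoint's bit, without reading the register first */
    hw_write(ci, OP_ENDPTPRIME, ~0, BIT(n));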
-