- 24 Mar, 2014 8 commits
-
Alexandre Bounine authored
commit 04379dff upstream. This patch is a modification of the patch originally proposed by Xiaotian Feng <xtfeng@gmail.com>: https://lkml.org/lkml/2012/11/5/413 This new version disables DMA channel interrupts and ensures that the tasklet will not be scheduled again before calling tasklet_kill(). Unfortunately the updated patch was not released at that time due to a planned rework of the Tsi721 mport driver to use threaded interrupts (which has yet to happen). Recently the issue was reported again: https://lkml.org/lkml/2014/2/19/762. Description from Xiaotian's original patch: "Some drivers use tasklet_disable in the device remove/release process. tasklet_disable will inc tasklet->count and return. If the tasklet has not been handled yet under some softirq pressure, the tasklet stays on the tasklet_vec and never gets a chance to be executed. This can lead to a heavily loaded ksoftirqd, woken with pending softirqs while the tasklet is disabled. tasklet_kill should be used in this case." This patch is applicable to kernel versions starting from v3.5.
Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com>
Cc: Matt Porter <mporter@kernel.crashing.org>
Cc: Xiaotian Feng <xtfeng@gmail.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Mike Galbraith <bitbucket@online.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
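The safe teardown ordering described above, as a minimal sketch (the per-channel context and the interrupt-masking helper are hypothetical stand-ins for the real Tsi721 structures):

    #include <linux/interrupt.h>

    /* Hypothetical per-channel context; the real Tsi721 driver differs. */
    struct chan_ctx {
            struct tasklet_struct tasklet;
    };

    static void chan_mask_irq(struct chan_ctx *ctx)
    {
            /* device-specific (assumed): mask the DMA channel interrupt
             * so the handler can no longer schedule the tasklet */
    }

    static void chan_teardown(struct chan_ctx *ctx)
    {
            chan_mask_irq(ctx);
            /* tasklet_kill(), not tasklet_disable(): a disabled-but-queued
             * tasklet would sit on tasklet_vec forever, keeping ksoftirqd
             * spinning; tasklet_kill() waits for it to run and unqueues it */
            tasklet_kill(&ctx->tasklet);
    }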
-
George McCollister authored
commit 791c9e02 upstream. dequeue_entity() is called when p->on_rq is set, and it sets se->on_rq = 0, which appears to guarantee that the !se->on_rq condition is met. If the task has done set_current_state(TASK_INTERRUPTIBLE) without schedule(), the second condition will be met and vruntime will be incorrectly adjusted twice. In certain cases this can result in the task's vruntime never increasing past the vruntime of other tasks on the CFS run queue, starving them of CPU time. This patch changes switched_from_fair() to use !p->on_rq instead of !se->on_rq. I was able to cause a task with a priority of 120 to starve all other tasks with the same priority on an ARM platform running 3.2.51-rt72 PREEMPT RT by writing one character at a time to a serial tty (16550 UART) in a tight loop. I was also able to verify that making this change corrects the problem on that platform and kernel version.
Signed-off-by: George McCollister <george.mccollister@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1392767811-28916-1-git-send-email-george.mccollister@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
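The corrected check, condensed from the kernel/sched/fair.c of that era (surrounding code elided):

    static void switched_from_fair(struct rq *rq, struct task_struct *p)
    {
            struct sched_entity *se = &p->se;
            struct cfs_rq *cfs_rq = cfs_rq_of(se);

            /*
             * Test p->on_rq, not se->on_rq: dequeue_entity() clears
             * se->on_rq even for a task that merely set
             * TASK_INTERRUPTIBLE, so se->on_rq cannot distinguish
             * "sleeping" from "switching class while queued".
             */
            if (!p->on_rq && p->state != TASK_RUNNING) {
                    /* undo the sleeper credit exactly once */
                    place_entity(cfs_rq, se, 0);
                    se->vruntime -= cfs_rq->min_vruntime;
            }
    }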
-
Hugh Dickins authored
commit ce48225f upstream. Commit 0eef6156 ("memcg: fix css reference leak and endless loop in mem_cgroup_iter") got its interaction with the commit a few before it, d8ad3055 ("mm/memcg: iteration skip memcgs not yet fully initialized"), slightly wrong, and we didn't notice at the time. It's elusive, and harder to hit than the original, but for a couple of days before rc1 I several times saw an endless loop similar to the one supposedly being fixed. This time it was a tighter loop in __mem_cgroup_iter_next(): we can get here when our root has already been offlined, and the ordering of conditions was such that we then just cycled around forever. Fixes: 0eef6156 ("memcg: fix css reference leak and endless loop in mem_cgroup_iter").
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Greg Thelen <gthelen@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Al Viro authored
commit 1b56e989 upstream.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Kara authored
commit 15c34a76 upstream. Global quota files are accessed from different nodes, so we cannot cache the offset of a quota structure in the quota file after we drop our node's reference count to it: past that moment the quota structure may be freed and reallocated elsewhere by a different node, resulting in corruption of the quota file. Fix the problem by clearing dq_off when we are releasing the dquot structure. We also remove the DQ_READ_B handling because it is useless - DQ_ACTIVE_B is set iff DQ_READ_B is set.
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Goldwyn Rodrigues <rgoldwyn@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Reviewed-by: Mark Fasheh <mfasheh@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
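A minimal sketch of the core of the fix as described (release path heavily condensed; the real code lives in fs/ocfs2):

    static int ocfs2_release_dquot(struct dquot *dquot)
    {
            /* ... clear on-disk use bits, drop DQ_ACTIVE_B, etc. ... */

            /*
             * The cached file offset is only valid while this node holds
             * a reference: another node may free and reallocate the quota
             * structure once we let go, so forget the offset now.
             */
            dquot->dq_off = 0;
            return 0;
    }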
-
Vlastimil Babka authored
commit 9050d7eb upstream. Daniel Borkmann reported a VM_BUG_ON assertion failing:

------------[ cut here ]------------
kernel BUG at mm/mlock.c:528!
invalid opcode: 0000 [#1] SMP
Modules linked in: ccm arc4 iwldvm [...] video
CPU: 3 PID: 2266 Comm: netsniff-ng Not tainted 3.14.0-rc2+ #8
Hardware name: LENOVO 2429BP3/2429BP3, BIOS G4ET37WW (1.12 ) 05/29/2012
task: ffff8801f87f9820 ti: ffff88002cb44000 task.ti: ffff88002cb44000
RIP: 0010:[<ffffffff81171ad0>] [<ffffffff81171ad0>] munlock_vma_pages_range+0x2e0/0x2f0
Call Trace:
 do_munmap+0x18f/0x3b0
 vm_munmap+0x41/0x60
 SyS_munmap+0x22/0x30
 system_call_fastpath+0x1a/0x1f
RIP munlock_vma_pages_range+0x2e0/0x2f0
---[ end trace a0088dcf07ae10f2 ]---

because munlock_vma_pages_range() thinks it's unexpectedly in the middle of a THP page. This can be reproduced with the default config since 3.11 kernels. A reproducer can be found in the kernel's selftest directory for networking by running ./psock_tpacket [1]. The problem is that an order=2 compound page (allocated by alloc_one_pg_vec_page()) is part of the munlocked VM_MIXEDMAP vma (mapped by packet_mmap()) and is mistaken for a THP page, assumed to be order=9. The checks for THP in munlock came with commit ff6a6da6 ("mm: accelerate munlock() treatment of THP pages"), i.e. since 3.9, but did not trigger a bug then. They just make munlock_vma_pages_range() skip such compound pages until the next 512-pages-aligned page, when it encounters a head page. This is, however, not a problem for vma's where mlocking has no effect anyway, but it can distort the accounting. Since commit 7225522b ("mm: munlock: batch non-THP page isolation and munlock+putback using pagevec") this can trigger the VM_BUG_ON in the PageTransHuge() check. This patch fixes the issue by adding the VM_MIXEDMAP flag to VM_SPECIAL, the list of flags that make vma's non-mlockable and non-mergeable. The reasoning is that VM_MIXEDMAP vma's are similar to VM_PFNMAP, which is already on the VM_SPECIAL list, and both are intended for non-LRU pages where mlocking makes no sense anyway. Related LKML discussion can be found in [2].

[1] tools/testing/selftests/net/psock_tpacket
[2] https://lkml.org/lkml/2014/1/10/427

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Reported-by: Daniel Borkmann <dborkman@redhat.com>
Tested-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: John David Anglin <dave.anglin@bell.net>
Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Jared Hulbert <jaredeh@gmail.com>
Tested-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
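The substance of the fix is a one-line change to the VM_SPECIAL mask in include/linux/mm.h:

    /* VM_MIXEDMAP joins the flags that make a VMA non-mlockable and
     * non-mergeable, matching the existing treatment of VM_PFNMAP */
    #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)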
-
Johannes Weiner authored
commit 27329369 upstream. Jan Stancek reports manual page migration encountering allocation failures after migrating some pages while there is still plenty of memory free, and bisected the problem down to commit 81c0a2bb ("mm: page_alloc: fair zone allocator policy"). The problem is that GFP_THISNODE obeys the zone fairness allocation batches on one hand, but doesn't reset them and wake kswapd on the other hand. After a few such allocations, the batches are exhausted and the allocations fail. Fixing this means either having GFP_THISNODE wake up kswapd, or having GFP_THISNODE not participate in zone fairness at all. The latter seems safer as an acute bugfix; we can clean up later.
Reported-by: Jan Stancek <jstancek@redhat.com>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
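A sketch of the chosen approach under stated assumptions (ALLOC_FAIR and NR_ALLOC_BATCH were real mm/page_alloc.c symbols of that era, but the gating below is a paraphrase of the idea, not the literal hunk):

    /* fast path: the fair-zone policy round-robins over local zones
     * by draining a per-zone allocation batch... */
    if (alloc_flags & ALLOC_FAIR) {
            if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0)
                    continue;       /* batch drained: try the next zone */
    }

    /* ...so GFP_THISNODE allocations, which never reset batches or
     * wake kswapd, must simply opt out of fairness altogether */
    if ((gfp_mask & GFP_THISNODE) == GFP_THISNODE)
            alloc_flags &= ~ALLOC_FAIR;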
-
Minchan Kim authored
commit db5d711e upstream. zram_meta_alloc() could fail, so the caller should check its result. Otherwise the system will hang.
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
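The fix amounts to a standard NULL check at the call site (condensed from the zram driver):

    meta = zram_meta_alloc(disksize);
    if (!meta)
            return -ENOMEM;         /* previously dereferenced NULL and hung */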
-
- 07 Mar, 2014 32 commits
-
Greg Kroah-Hartman authored
-
Jani Nikula authored
commit f51a44b9 upstream. Retrying indefinitely places too much trust in the aux implementation of the sink devices.
Reported-by: Daniel Martin <consume.noise@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71267
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Tested-by: Theodore Ts'o <tytso@mit.edu>
Tested-by: Sree Harsha Totakura <freedesktop@h.totakura.in>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
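The idea, as a hedged sketch (helper names, retry count, and delay are assumptions, not the literal i915 code): bound the number of retries on a DEFER reply instead of looping until the sink stops deferring.

    /* hypothetical helper names; only the bounded-retry shape matters */
    for (retry = 0; retry < 32; retry++) {
            status = aux_do_transaction(ch, msg);
            if (status != AUX_REPLY_DEFER)
                    break;          /* done, or a hard error to report */
            usleep_range(400, 500); /* give the sink time to recover */
    }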
-
Jani Nikula authored
commit 04eada25 upstream. Give more slack to sink devices before retrying on native aux defer. AFAICT the 100 us timeout was not based on the DP spec.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Cc: stable@vger.kernel.org (on Jani's request)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jerome Glisse authored
commit d9654413 upstream. Need to free the uvd ring. Also reshuffle gart tear down to happen after uvd tear down.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 9ef4e1d0 upstream. Causes display problems. We had already disabled sharing for non-DP displays. Based on a patch from Niels Ole Salscheider <niels_ole@salscheider-online.de>.
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=58121
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christian König authored
commit 5e386b57 upstream. Otherwise we might get a crash here.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 9f050c7f upstream. Print the supported functions mask in addition to the version. This is useful in debugging PX problems since we can see what functions are available.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit d7eb0a09 upstream. Properly clear the enable bit when audio disable is requested.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mike Snitzer authored
commit 1acacc07 upstream. dm_pool_close_thin_device() must be called if dm_set_target_max_io_len() fails in thin_ctr(). Otherwise __pool_destroy() will fail because the pool will still have an open thin device:

device-mapper: thin metadata: attempt to close pmd when 1 device(s) are still open
device-mapper: thin: __pool_destroy: dm_pool_metadata_close() failed.

Also, an error code must be established when failing thin_ctr() because the pool is in fail_io mode.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
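A sketch of the corrected constructor error path as described (label name and surrounding code condensed from drivers/md/dm-thin.c):

    r = dm_set_target_max_io_len(ti, tc->pool->sectors_per_block);
    if (r)
            goto bad;       /* previously returned without closing */
    /* ... */
    bad:
            dm_pool_close_thin_device(tc->td);      /* undo the earlier open */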
-
Mike Snitzer authored
commit 4d1662a3 upstream. Commit 905e51b3 ("dm thin: commit outstanding data every second") introduced a periodic commit. This commit occurs regardless of whether any thin devices have made changes. Fix the periodic commit to check if any of a pool's thin devices have changed using dm_pool_changed_this_transaction().
Reported-by: Alexander Larsson <alexl@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mike Snitzer authored
commit c6eda5e8 upstream. Commit c9d28d5d ("dm cache: promotion optimisation for writes") incorrectly placed the 'hook_info' member in the writethrough-only portion of the per_bio_data structure. Given that the overwrite optimization may be used for writeback, the 'hook_info' member must be placed above the 'cache' member of the per_bio_data structure. Any members above 'cache' are available from both the writeback and writethrough modes' per_bio_data structure.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hannes Reinecke authored
commit a1989b33 upstream. An invalid ioctl will never be valid, irrespective of whether multipath has active paths or not. So for invalid ioctls we do not have to wait for multipath to activate any paths, but can rather return an error code immediately. This fix resolves numerous instances of:

udevd[]: worker [] unexpectedly returned with status 0x0100

that have been seen during testing.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Linus Walleij authored
commit e9baa9d9 upstream. It appears that in the DMA40 driver the DMA tasklet will very often dereference memory for a descriptor just freed from the DMA40 slab. Nothing happens because no other part of the driver has yet had a chance to claim this memory, but it's really nasty to dereference freed memory, so let's check the flag before the descriptor is freed and store it in a bool variable.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
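The pattern described above, condensed from the DMA40 tasklet (variable names as in drivers/dma/ste_dma40.c; surrounding code elided):

    /* latch the flag into a local bool *before* the free */
    callback_active = !!(d40d->txd.flags & DMA_PREP_INTERRUPT);
    d40_desc_remove(d40d);
    d40_desc_free(d40c, d40d);      /* d40d must not be touched past here */

    if (callback_active && callback)
            callback(callback_param);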
-
Sebastian Capella authored
commit f8d5b9e9 upstream. During restore, the pm_notifier chain is called with PM_RESTORE_PREPARE. The firmware_class driver's handler fw_pm_notify does not handle this event. As a result, it keeps a reader on kmod.c's umhelper_sem. During freeze_processes, the call to __usermodehelper_disable tries to take a write lock on this semaphore and hangs waiting.
Signed-off-by: Sebastian Capella <sebastian.capella@linaro.org>
Acked-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
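A sketch of the fix as described (switch body condensed from fw_pm_notify in drivers/base/firmware_class.c; the work done per case is elided):

    switch (mode) {
    case PM_HIBERNATION_PREPARE:
    case PM_SUSPEND_PREPARE:
    case PM_RESTORE_PREPARE:        /* the previously missing case */
            /* ... same freeze-preparation work as the other two ... */
            break;
    /* ... */
    }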
-
Jean Delvare authored
commit 75135da0 upstream. pci_get_device() decrements the reference count of "from" (last argument) so when we break off the loop successfully we have only one device reference - and we don't know which device we have. If we want a reference to each device, we must take them explicitly and let the pci_get_device() walk complete to avoid duplicate references. This is serious, as over-putting device references will cause the device to eventually disappear. Without this fix, the kernel crashes after a few insmod/rmmod cycles. Tested on an Intel S7000FC4UR system with a 7300 chipset.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Link: http://lkml.kernel.org/r/20140224111656.09bbb7ed@endymion.delvare
Cc: Mauro Carvalho Chehab <m.chehab@samsung.com>
Cc: Doug Thompson <dougthompson@xmission.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
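The reference-counting pattern described above, as a generic sketch (want() and kept[] are hypothetical; pci_get_device() and pci_dev_get() are the real APIs):

    struct pci_dev *pdev = NULL;

    /* Let the walk run to completion: pci_get_device() drops the ref
     * on "from" and takes one on the device it returns, so breaking
     * out early leaves exactly one ref of unknown ownership. */
    while ((pdev = pci_get_device(vendor, device, pdev)) != NULL) {
            if (want(pdev))
                    kept[n++] = pci_dev_get(pdev);  /* explicit extra ref */
    }
    /* when the loop ends naturally, no implicit reference remains */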
-
Dr. Greg Wettstein authored
commit 6f58c780 upstream. A selective retransmission request (SRR) is a fibre-channel protocol control request which provides support for requesting retransmission of a data sequence in response to an issue such as frame loss or corruption. These events are experienced infrequently in fibre-channel based networks, which makes it difficult to test and assess codepaths which handle these events. We were fortunate enough, for some definition of fortunate, to have a metro-area single-mode SAN link which, at 10 Gbps sustained load levels, would consistently generate SRRs in an SCST based target implementation using our SCST/in-kernel Qlogic target interface driver. In response to an SRR the in-kernel Qlogic target driver immediately panics, resulting in a catastrophic storage failure for serviced initiators. The culprit was a debug statement in the qla_target.c file which does not verify that a pointer to the SCSI CDB is not null. The unchecked pointer dereference results in the kernel panic and resultant system failure. The other two references to the SCSI CDB by the SRR handling code use a ternary operator to verify a non-null pointer is being acted on. This patch simply adds a similar test to the implicated debug statement. This patch is a candidate for any stable kernel being maintained since it addresses a potentially catastrophic event with minimal downside.
Signed-off-by: Dr. Greg Wettstein <greg@enjellic.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
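The shape of the guard described above (pointer and field names condensed from qla_target.c; the debug-macro arguments around it are elided):

    /* same ternary guard as the neighbouring SRR code:
     * never dereference a CDB pointer that may be NULL */
    u8 op = cmd->cdb ? cmd->cdb[0] : 0;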
-
Olof Johansson authored
commit e306dfd0 upstream. The frame PC value in the unwind code used to just take the saved LR value and use that. That's incorrect as a stack trace, since it shows the return path stack, not the call path stack. In particular, it shows faulty information when the bl is the very last instruction of one label, since the return point will be in the next label. That can easily be seen with tail calls to panic(), which is marked __noreturn and thus doesn't have anything useful after it. Easiest here is to just correct the unwind code and do a -4, to get the actual call site for the backtrace instead of the return site.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
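The essence of the change in the arm64 unwinder (condensed from arch/arm64/kernel/stacktrace.c):

    frame->fp = *(unsigned long *)(fp);
    /* -4: report the PC of the bl itself, not the return address
     * saved in LR, so tail calls to __noreturn functions unwind
     * to the right symbol */
    frame->pc = *(unsigned long *)(fp + 8) - 4;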
-
James Hogan authored
commit f229006e upstream. Fix the irq_set_affinity callbacks in the Meta IRQ chip drivers to AND cpu_online_mask into the cpumask when picking a CPU to vector the interrupt to. As Thomas pointed out, the /proc/irq/$N/smp_affinity interface doesn't filter out offline CPUs, so without this patch, if you offline CPU0 and set an IRQ affinity to 0x3, it vectors the interrupt onto CPU0 even though it is offline.
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-metag@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
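The described fix is a one-line pattern inside the irq_set_affinity callback (generic sketch; cpumask_any_and() and cpu_online_mask are the real kernel APIs):

    /* pick a target CPU from the requested mask, restricted to CPUs
     * that are actually online */
    unsigned int cpu = cpumask_any_and(cpumask, cpu_online_mask);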
-
Kirill A. Shutemov authored
commit 9845cbbd upstream. Masayoshi Mizuma reported a bug in which an application hangs under the memcg limit. It happens on a write-protection fault to the huge zero page: if we successfully allocate a huge page to replace the zero page but hit the memcg limit, we need to split the zero page with split_huge_page_pmd() and fall back to small pages. The other part of the problem is that VM_FAULT_OOM has a special meaning in the do_huge_pmd_wp_page() context. __handle_mm_fault() expects the page to be split if it sees VM_FAULT_OOM and it will retry page fault handling. This causes an infinite loop if the page was not split. do_huge_pmd_wp_zero_page_fallback() can return VM_FAULT_OOM if it failed to allocate one small page, so falling back to small pages will not help. The solution for this part is to replace VM_FAULT_OOM with VM_FAULT_FALLBACK if fallback is required.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
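A sketch of the first half of the fix under stated assumptions (the charge-failure condition is condensed; split_huge_page_pmd() and VM_FAULT_FALLBACK are the real kernel symbols of that era):

    if (unlikely(charge_failed)) {  /* memcg limit hit (condensed) */
            put_page(new_page);
            split_huge_page_pmd(vma, address, pmd);
            ret |= VM_FAULT_FALLBACK;       /* retry with small pages */
            goto out;
    }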
-
Charles Keepax authored
commit c4204960 upstream. snd_soc_dapm_sync takes the dapm_mutex internally, but we currently take it externally as well. This patch fixes that.
Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Davidlohr Bueso authored
commit f3713fd9 upstream. Commit 93e6f119 ("ipc/mqueue: cleanup definition names and locations") added global hardcoded limits to the number of message queues that can be created. While these limits are per-namespace, the reality is that they end up breaking userspace applications. Historically users have, at least in theory, been able to create up to INT_MAX queues, and limiting it to just 1024 is way too low and dramatic for some workloads and use cases. For instance, Madars reports: "This update imposes bad limits on our multi-process application. As our app uses approaches that each process opens its own set of queues (usually something about 3-5 queues per process). In some scenarios we might run up to 3000 processes or more (which of-course for linux is not a problem). Thus we might need up to 9000 queues or more. All processes run under one user." Other affected users can be found in launchpad bug #1155695: https://bugs.launchpad.net/ubuntu/+source/manpages/+bug/1155695 Instead of increasing this limit, revert it entirely and fall back to the original way of dealing with queue limits - where once a user's resource limit is reached, and all memory is used, new queues cannot be created.
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Reported-by: Madars Vitolins <m@silodev.com>
Acked-by: Doug Ledford <dledford@redhat.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Kara authored
commit 1362f4ea upstream. Currently the last dqput() can race with dquot_scan_active(), causing it to call the callback for an already deactivated dquot. The race is as follows:

CPU1                                        CPU2
dqput()
  spin_lock(&dq_list_lock);
  if (atomic_read(&dquot->dq_count) > 1) {
   - not taken
  if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
    spin_unlock(&dq_list_lock);
    ->release_dquot(dquot);
      if (atomic_read(&dquot->dq_count) > 1)
       - not taken
                                            dquot_scan_active()
                                              spin_lock(&dq_list_lock);
                                              if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
                                               - not taken
                                              atomic_inc(&dquot->dq_count);
                                              spin_unlock(&dq_list_lock);
      - proceeds to release dquot
                                              ret = fn(dquot, priv);
                                               - called for inactive dquot

Fix the problem by making sure any possible ->release_dquot() call has finished by the time we invoke the callback, and that new calls to it will notice the reference dquot_scan_active() has taken and bail out.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
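A sketch of the dquot_scan_active() side of the fix as described (condensed; wait_on_dquot() is assumed to be fs/quota/dquot.c's internal helper for waiting out an in-flight release):

    atomic_inc(&dquot->dq_count);
    spin_unlock(&dq_list_lock);
    /* let any racing ->release_dquot() finish before we look again */
    wait_on_dquot(dquot);
    /* the dquot may have been deactivated while we waited */
    if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
            goto out;
    ret = fn(dquot, priv);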
-
Dan Williams authored
commit da87ca4d upstream. Since commit 77873803 ("net_dma: mark broken") we no longer pin dma engines active for the network-receive-offload use case. As a result the ->free_chan_resources() that occurs after the driver self test no longer has a NET_DMA induced ->alloc_chan_resources() to back it up. A late firing irq can lead to ksoftirqd spinning indefinitely due to the tasklet_disable() performed by ->free_chan_resources(). Only ->alloc_chan_resources() can clear this condition in affected kernels. This problem has been present since commit 3e037454 ("I/OAT: Add support for MSI and MSI-X") in 2.6.24, but is only now exposed. Given that the NET_DMA use case is deprecated, we can revisit moving the driver to use threaded irqs. For now, just tear down the irq and tasklet properly:

1/ Disable the irq from triggering the tasklet
2/ Disable the irq from re-arming
3/ Flush inflight interrupts
4/ Flush the timer
5/ Flush inflight tasklets

References:
https://lkml.org/lkml/2014/1/27/282
https://lkml.org/lkml/2014/2/19/672

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Reported-by: Mike Galbraith <bitbucket@online.de>
Reported-by: Stanislav Fomichev <stfomichev@yandex-team.ru>
Tested-by: Mike Galbraith <bitbucket@online.de>
Tested-by: Stanislav Fomichev <stfomichev@yandex-team.ru>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
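The five steps above, as a generic teardown sketch (the register name in steps 1-2 is a hypothetical stand-in for the ioatdma-specific masking; the flush calls are real kernel APIs):

    /* 1-2: stop the device from raising or re-arming the interrupt
     * (register name is a placeholder, not the real ioat offset) */
    writew(0, chan->reg_base + INTR_MASK_OFFSET);

    synchronize_irq(chan->irq);        /* 3: flush inflight interrupts */
    del_timer_sync(&chan->timer);      /* 4: flush the timer */
    tasklet_kill(&chan->cleanup_task); /* 5: flush inflight tasklets */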
-
Eric Paris authored
commit 9085a642 upstream. When writing policy via /sys/fs/selinux/policy I wrote the type and class of filename trans rules in CPU endianness instead of little endian. On x86_64 this works just fine, but it means that on big endian arches like ppc64 and s390, userspace reads the policy and converts it with le32_to_cpu, so the values come out all screwed up. Write the values in le format like it should have been from the start.
Signed-off-by: Eric Paris <eparis@redhat.com>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <pmoore@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
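The pattern of the fix (field names condensed from the filename-transition writer in security/selinux/ss/policydb.c): convert each value with cpu_to_le32() before it goes into the output buffer, mirroring the le32_to_cpu() userspace performs on read.

    /* was a raw CPU-endian store; the on-disk policy format is LE */
    buf[0] = cpu_to_le32(ft->stype);
    buf[1] = cpu_to_le32(ft->ttype);
    buf[2] = cpu_to_le32(ft->tclass);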
-
Max Filippov authored
commit e2fd1374 upstream. Most in-kernel users want registers spilled on the kernel stack and don't require PS.EXCM to be set. That means they don't need the fixup routine and can reuse the regular window overflow mechanism instead, which makes the spill routine very simple.
Suggested-by: Chris Zankel <chris@zankel.net>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Max Filippov authored
commit 3251f1e2 upstream. We need it saved because it contains a3, where we track which register windows we still need to spill, and the fixup handler may call C exception handlers. Also fix comments.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andrew Lunn authored
commit d86e9af6 upstream. Enabling SPARSE_IRQ exposes a bug in the irq-orion bridge interrupt handler. The bridge interrupt is implemented using a single generic chip. Thus the parameter passed to irq_get_domain_generic_chip() should always be zero.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Fixes: 9dbd90f1 ("irqchip: Add support for Marvell Orion SoCs")
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
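In code terms (one line, condensed from drivers/irqchip/irq-orion.c): with a single generic chip in the domain, the lookup index is always 0.

    /* single generic chip: a hwirq-derived index would be wrong here */
    gc = irq_get_domain_generic_chip(d, 0);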
-
Sebastian Hesselbarth authored
commit e0318ec3 upstream. Bridge IRQ_CAUSE bits are asserted regardless of the corresponding bit in the IRQ_MASK register. To avoid interrupt events from stale irqs, we have to clear them before unmask. This installs an .irq_startup callback to ensure stale irqs are cleared before the initial unmask.
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sebastian Hesselbarth authored
commit 5f40067f upstream. Bridge irqs are edge-triggered, i.e. they get asserted on low-to-high transitions and not on the level of the downstream interrupt line. This replaces handle_level_irq with the more appropriate handle_edge_irq.
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sebastian Hesselbarth authored
commit 7b119fd1 upstream. It is good practice to mask and clear pending irqs on init. We already mask all irqs, so also clear the bridge irq cause register.
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit 37c367ec upstream. The HP Folio 13 may have a broken BIOS that doesn't set up the mute LED GPIO properly, and the driver guesses it wrongly, too. Add a new fixup entry for setting the GPIO pin statically for this laptop.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=70991
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Peter Zijlstra authored
commit e3703f8c upstream. Drew Richardson reported that he could make the kernel go *boom* when hotplugging while having perf events active. It turned out that when you have a group event, the code in __perf_event_exit_context() fails to remove the group siblings from the context. We then proceed with destroying and freeing the event, and when you re-plug the CPU and try to add another event to that CPU, things go *boom* because you've still got dead entries there.
Reported-by: Drew Richardson <drew.richardson@arm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/n/tip-k6v5wundvusvcseqj1si0oz0@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-