- 12 Mar, 2014 8 commits
-
-
Anssi Hannula authored
commit 11f7c52d upstream. hdmi_manual_setup_channel_mapping() and hdmi_std_setup_channel_mapping try to assign ALSA channels to HDMI channel slots and disable (i.e. silence) other slots. However, they try to disable a slot by using AC_VERB_SET_CHAN_SLOT with parameter ((alsa_ch << 8) | 0xf), while the correct parameter is ((0xf << 8) | hdmi_slot), i.e. the slot should be unassigned, not the ALSA channel. Fix that by actually disabling the unused slots. Note that this bug did not cause any (reported) issues because slots incorrectly having audio are normally ignored by a receiver if the CEA channel allocation used does not map that slot to any speaker. Additionally, the converter channel count configuration limits the number of actually active channels in any case. Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
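A minimal sketch of the corrected disable operation; the helper name is made up and the verb macro is spelled the way the commit text spells it, so this is an illustration of the payload layout rather than the literal patch_hdmi.c code:

  /* Silencing an HDMI channel slot: keep the slot and assign it the
   * "no channel" value 0xf, instead of keeping the ALSA channel and
   * assigning it slot 0xf (the old, broken behaviour). */
  static void hdmi_disable_slot(struct hda_codec *codec, hda_nid_t pin_nid,
                                int hdmi_slot)
  {
          /* wrong:  (alsa_ch << 8) | 0xf        -- unassigns the ALSA channel */
          /* right:  (0xf << 8)     | hdmi_slot  -- unassigns the HDMI slot   */
          snd_hda_codec_write(codec, pin_nid, 0, AC_VERB_SET_CHAN_SLOT,
                              (0xf << 8) | hdmi_slot);
  }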
-
Anssi Hannula authored
commit 1df5a06a upstream. Currently the converter channel count is set to the number of actual input channels. The audio infoframe channel count field is set similarly. However, sometimes the used channel map does not map all input channels to outputs. Notably, 3 channel modes (e.g. 2.1) require a dummy input channel so there are 4 input channels. According to the HDA specification, converter channel count should be programmed according to the number of _active_ channels. On Intel HDMI codecs (but not on NVIDIA), setting the converter channel count to a higher value than the number of channels actually mapped to HDMI slots will cause no audio to be output at all. Note that the effects of this issue are currently partially masked by other bugs that prevent the driver from actually unmapping channels in certain cases. For example, if a 4 channel stream is first created and prepared, it gets a FL,FR,RL,RR mapping (ALSA->HDMI slot mapping 0->0, 1->1, 2->4, 3->5). If one thereafter assigns a FR,FL,FC mapping to it, the driver will remap 2->3 but fail to unmap 2->4 and 3->5, so there are still 4 active channels and the issue will not trigger in this case. These bugs will be fixed separately. Fix the channel counts in the converter channel count field and in the audio infoframe channel count field to match the actual number of active channels. Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
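As a rough illustration of the rule (not the actual patch_hdmi.c code; the map representation and the helper name are made up), the count programmed into the converter should be derived from the slots that actually carry a channel:

  /* Count the channels that are actually mapped to an HDMI slot;
   * 0xf marks an unassigned slot. */
  static int hdmi_active_channel_count(const unsigned char *slot_to_channel,
                                       int num_slots)
  {
          int i, count = 0;

          for (i = 0; i < num_slots; i++)
                  if (slot_to_channel[i] != 0x0f)
                          count++;
          return count;
  }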
-
Anssi Hannula authored
commit 90f28002 upstream. hdmi_std_setup_channel_mapping() selects a Channel Allocation according to the sink reported speaker mask, preferring the ALSA standard layouts. If the channel allocation is not one of the ALSA standard layouts, the ALSA channels are mapped directly to HDMI channels in order. However, the function does not take into account that there are holes in the HDMI channel map. Additionally, the function tries to disable a slot by using AC_VERB_SET_CHAN_SLOT with parameter ((alsa_ch << 8) | 0xf), while the correct parameter is ((0xf << 8) | hdmi_slot), i.e. the slot should be unassigned, not the ALSA channel. Fix both of the issues for non-ALSA-default layouts. Tested on Intel HDMI with a speaker mask of FL | FR | FC | RC, which causes CA 0x06 to be selected for 4-channel audio, which causes incorrect output (sound destined to RC goes to FC and FC goes nowhere) without the patch. Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Gerald Schaefer authored
commit b7c5b1aa upstream. Commit 27f6b416 "s390/vtimer: rework virtual timer interface" removed the call to init_virt_timer() by mistake, which is added again by this patch. Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Martin Schwidefsky authored
commit 8adbf78e upstream. Git commit 4f37a68c "s390: Use direct ktime path for s390 clockevent device" makes use of the CLOCK_EVT_FEAT_KTIME clockevent option to avoid the delta calculation with ktime_get() in clockevents_program_event and the get_tod_clock() in s390_next_event. This is based on the assumption that the difference between the internal ktime and the hardware clock is reflected in the wall_to_monotonic delta. But this is not true, the ntp corrections are applied via changes to the tk->mult multiplier and this is not reflected in wall_to_monotonic. In theory this could be solved by using the raw monotonic clock but it is simpler to switch back to the standard clock delta calculation. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Martin Schwidefsky authored
commit 79c74ecb upstream. Switch to the improved update_vsyscall interface that provides sub-nanosecond precision for gettimeofday and clock_gettime. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Hendrik Brueckner authored
commit d1e61fe4 upstream. Unloading the fs3270 kernel module does not remove the created "3270/tub" device. Reloading the module then causes a sysfs warning: "sysfs: cannot create duplicate filename '/devices/virtual/3270/3270!tub'". Call device_destroy() in the module exit function to solve this issue. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Felipe Contreras authored
commit b4cb9244 upstream. More people have reported they need this for their machines to work correctly. References: https://bugzilla.kernel.org/show_bug.cgi?id=60682 Reported-by: Stefan Hellermann <bugzilla.kernel.org@the2masters.de> Reported-by: Benedikt Sauer <filmor@gmail.com> Reported-by: Erno Kuusela <erno@iki.fi> Reported-by: Jonathan Doman <jonathan.doman@gmail.com> Reported-by: Christoph Klaffl <christophklaffl@gmail.com> Reported-by: Jan Hendrik Nielsen <jan.hendrik.nielsen@informatik.hu-berlin.de> Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
- 10 Mar, 2014 1 commit
-
-
Jiri Slaby authored
-
- 05 Mar, 2014 31 commits
-
-
Jani Nikula authored
commit f51a44b9 upstream. Retrying indefinitely places too much trust on the aux implementation of the sink devices. Reported-by: Daniel Martin <consume.noise@gmail.com> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71267 Signed-off-by: Jani Nikula <jani.nikula@intel.com> Tested-by: Theodore Ts'o <tytso@mit.edu> Tested-by: Sree Harsha Totakura <freedesktop@h.totakura.in> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Jani Nikula authored
commit 04eada25 upstream. Give more slack to sink devices before retrying on native aux defer. AFAICT the 100 us timeout was not based on the DP spec. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Cc: stable@vger.kernel.org (on Jani's request) Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
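A hedged sketch of the combined behaviour of this and the previous patch: a bounded number of retries on a native AUX DEFER reply, with a longer sleep between attempts. aux_native_write() is a hypothetical stand-in, and the retry count, sleep range, and use of -EBUSY to mean "sink replied DEFER" are illustrative, not the exact i915 code:

  static int aux_write_with_defer_retry(void *dp, unsigned int address,
                                        const u8 *buf, size_t len)
  {
          int ret, retry;

          for (retry = 0; retry < 16; retry++) {                  /* was: retry forever */
                  ret = aux_native_write(dp, address, buf, len);  /* hypothetical helper */
                  if (ret != -EBUSY)                              /* -EBUSY: sink replied DEFER */
                          return ret;
                  usleep_range(400, 500);                         /* was: a ~100 us wait */
          }
          return -ETIMEDOUT;                                      /* sink kept deferring */
  }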
-
Jerome Glisse authored
commit d9654413 upstream. Need to free the uvd ring. Also reshuffle gart tear down to happen after uvd tear down. Signed-off-by: Jérôme Glisse <jglisse@redhat.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Alex Deucher authored
commit 9ef4e1d0 upstream. Causes display problems. We had already disabled sharing for non-DP displays. Based on a patch from: Niels Ole Salscheider <niels_ole@salscheider-online.de> bug: https://bugzilla.kernel.org/show_bug.cgi?id=58121 Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Christian König authored
commit 5e386b57 upstream. Otherwise we might get a crash here. Signed-off-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Alex Deucher authored
commit 9f050c7f upstream. Print the supported functions mask in addition to the version. This is useful in debugging PX problems since we can see what functions are available. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Alex Deucher authored
commit d7eb0a09 upstream. Properly clear the enable bit when audio disable is requested. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Mike Snitzer authored
commit 1acacc07 upstream. dm_pool_close_thin_device() must be called if dm_set_target_max_io_len() fails in thin_ctr(). Otherwise __pool_destroy() will fail because the pool will still have an open thin device: device-mapper: thin metadata: attempt to close pmd when 1 device(s) are still open device-mapper: thin: __pool_destroy: dm_pool_metadata_close() failed. Also, must establish error code if failing thin_ctr() because the pool is in fail_io mode. Signed-off-by: Mike Snitzer <snitzer@redhat.com> Acked-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
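A hedged sketch of the corrected constructor error path; the wrapper function itself is illustrative (the real code lives inline in thin_ctr()), but dm_set_target_max_io_len() and dm_pool_close_thin_device() are the calls named above:

  static int thin_ctr_set_max_io_len(struct dm_target *ti, struct thin_c *tc)
  {
          int r;

          r = dm_set_target_max_io_len(ti, tc->pool->sectors_per_block);
          if (r) {
                  /* undo dm_pool_open_thin_device() so __pool_destroy()
                   * can later close the pool metadata cleanly */
                  dm_pool_close_thin_device(tc->td);
                  return r;
          }
          return 0;
  }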
-
Mike Snitzer authored
commit 4d1662a3 upstream. Commit 905e51b3 ("dm thin: commit outstanding data every second") introduced a periodic commit. This commit occurs regardless of whether any thin devices have made changes. Fix the periodic commit to check if any of a pool's thin devices have changed using dm_pool_changed_this_transaction(). Reported-by: Alexander Larsson <alexl@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Acked-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
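A minimal sketch of the check, assuming an illustrative function shape and a hypothetical commit helper; dm_pool_changed_this_transaction() is the metadata query named above:

  static void periodic_commit(struct pool *pool)
  {
          /* skip the commit entirely if no thin device dirtied metadata
           * in the current transaction */
          if (!dm_pool_changed_this_transaction(pool->pmd))
                  return;

          commit_pool_metadata(pool);     /* hypothetical commit helper */
  }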
-
Hannes Reinecke authored
commit a1989b33 upstream. An invalid ioctl will never be valid, irrespective of whether multipath has active paths or not. So for invalid ioctls we do not have to wait for multipath to activate any paths, but can rather return an error code immediately. This fix resolves numerous instances of: udevd[]: worker [] unexpectedly returned with status 0x0100 that have been seen during testing. Signed-off-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Linus Walleij authored
commit e9baa9d9 upstream. It appears that in the DMA40 driver the DMA tasklet will very often dereference memory for a descriptor just freed from the DMA40 slab. Nothing happens because no other part of the driver has yet had a chance to claim this memory, but it's really nasty to dereference freed memory, so let's check the flag before the descriptor is freed and store it in a bool variable. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
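An excerpt-style sketch of the pattern, following the description above; d40d, d40c and d40_desc_free() come from the ste_dma40 tasklet context and the surrounding code is omitted:

  dma_async_tx_callback callback = d40d->txd.callback;
  void *callback_param = d40d->txd.callback_param;
  bool callback_active = !!(d40d->txd.flags & DMA_PREP_INTERRUPT);

  d40_desc_free(d40c, d40d);              /* descriptor may be reused from here on */

  if (callback_active && callback)
          callback(callback_param);       /* no dereference of d40d after the free */

The point of the local bool is that everything needed after the free is latched before the descriptor can be recycled.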
-
Sebastian Capella authored
commit f8d5b9e9 upstream. During restore, the pm_notifier chain is called with PM_RESTORE_PREPARE. The firmware_class driver handler fw_pm_notify does not have a handler for this. As a result, it keeps a reader on the kmod.c umhelper_sem. During freeze_processes, the call to __usermodehelper_disable tries to take a write lock on this semaphore and hangs waiting. Signed-off-by: Sebastian Capella <sebastian.capella@linaro.org> Acked-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
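A minimal sketch of the idea, assuming hypothetical helper names (the real fw_pm_notify() bodies differ): handle PM_RESTORE_PREPARE like the other prepare events instead of leaving it to fall through unhandled:

  static int fw_pm_notify(struct notifier_block *nb, unsigned long mode,
                          void *unused)
  {
          switch (mode) {
          case PM_HIBERNATION_PREPARE:
          case PM_SUSPEND_PREPARE:
          case PM_RESTORE_PREPARE:                /* previously missing case */
                  fw_cache_images_for_pm();       /* hypothetical helper */
                  break;
          case PM_POST_SUSPEND:
          case PM_POST_HIBERNATION:
          case PM_POST_RESTORE:
                  fw_uncache_images_after_pm();   /* hypothetical helper */
                  break;
          }
          return 0;
  }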
-
Jean Delvare authored
commit 75135da0 upstream. pci_get_device() decrements the reference count of "from" (last argument) so when we break off the loop successfully we have only one device reference - and we don't know which device we have. If we want a reference to each device, we must take them explicitly and let the pci_get_device() walk complete to avoid duplicate references. This is serious, as over-putting device references will cause the device to eventually disappear. Without this fix, the kernel crashes after a few insmod/rmmod cycles. Tested on an Intel S7000FC4UR system with a 7300 chipset. Signed-off-by: Jean Delvare <jdelvare@suse.de> Link: http://lkml.kernel.org/r/20140224111656.09bbb7ed@endymion.delvare Cc: Mauro Carvalho Chehab <m.chehab@samsung.com> Cc: Doug Thompson <dougthompson@xmission.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
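A hedged sketch of the reference-counting rule spelled out above; the device ID and array handling are illustrative, not the i7300_edac code:

  static int collect_refs(struct pci_dev **devs, int max)
  {
          struct pci_dev *pdev = NULL;
          int n = 0;

          /* each pci_get_device() call drops the reference on the previous
           * 'pdev', so take an explicit reference for every device we keep
           * and let the walk run to completion instead of breaking out */
          while ((pdev = pci_get_device(PCI_VENDOR_ID_INTEL,
                                        0x360c /* illustrative ID */, pdev))) {
                  if (n < max)
                          devs[n++] = pci_dev_get(pdev);
          }
          return n;
  }

Breaking out of the loop early would leave the walk's own reference on an unknown device, which is exactly the over-/under-put imbalance described above.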
-
Dr. Greg Wettstein authored
commit 6f58c780 upstream. A selective retransmission request (SRR) is a fibre-channel protocol control request which provides support for requesting retransmission of a data sequence in response to an issue such as frame loss or corruption. These events are experienced infrequently in fibre-channel based networks which makes it difficult to test and assess codepaths which handle these events. We were fortunate enough, for some definition of fortunate, to have a metro-area single-mode SAN link which, at 10 GBPS sustained load levels, would consistently generate SRR's in a SCST based target implementation using our SCST/in-kernel Qlogic target interface driver. In response to an SRR the in-kernel Qlogic target driver immediately panics resulting in a catastrophic storage failure for serviced initiators. The culprit was a debug statement in the qla_target.c file which does not verify that a pointer to the SCSI CDB is not null. The unchecked pointer dereference results in the kernel panic and resultant system failure. The other two references to the SCSI CDB by the SRR handling code use a ternary operator to verify a non-null pointer is being acted on. This patch simply adds a similar test to the implicated debug statement. This patch is a candidate for any stable kernel being maintained since it addresses a potentially catastrophic event with minimal downside. Signed-off-by: Dr. Greg Wettstein <greg@enjellic.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
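A hedged sketch of the guarded debug statement; the debug mask, message id, and format string are illustrative, and only the ternary NULL check mirrors the fix described above:

  struct se_cmd *se_cmd = &cmd->se_cmd;

  ql_dbg(ql_dbg_tgt_mgt, vha, 0xf02d,
         "SRR cmd %p (se_cmd %p, tag %d, op %x)\n",
         cmd, se_cmd, cmd->tag,
         se_cmd->t_task_cdb ? se_cmd->t_task_cdb[0] : 0);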
-
Olof Johansson authored
commit e306dfd0 upstream. The frame PC value in the unwind code used to just take the saved LR value and use that. That's incorrect as a stack trace, since it shows the return path stack, not the call path stack. In particular, it shows faulty information in case the bl is done as the very last instruction of one label, since the return point will be in the next label. That can easily be seen with tail calls to panic(), which is marked __noreturn and thus doesn't have anything useful after it. Easiest here is to just correct the unwind code and do a -4, to get the actual call site for the backtrace instead of the return site. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
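A minimal sketch of the adjustment; the real change lives in the arm64 unwind/backtrace code, and this helper only illustrates the arithmetic:

  /* The saved LR is the return address; the bl instruction that made the
   * call is one AArch64 instruction (4 bytes) earlier. */
  static unsigned long call_site_from_lr(unsigned long saved_lr)
  {
          return saved_lr - 4;
  }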
-
James Hogan authored
commit f229006e upstream. Fix irq_set_affinity callbacks in the Meta IRQ chip drivers to AND cpu_online_mask into the cpumask when picking a CPU to vector the interrupt to. As Thomas pointed out, the /proc/irq/$N/smp_affinity interface doesn't filter out offline CPUs, so without this patch if you offline CPU0 and set an IRQ affinity to 0x3 it vectors the interrupt onto CPU0 even though it is offline. Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-metag@vger.kernel.org Signed-off-by: Jiri Slaby <jslaby@suse.cz>
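A hedged sketch of the callback shape; the hardware routing step is elided, and only the cpu_online_mask intersection reflects the fix:

  static int meta_irq_set_affinity(struct irq_data *data,
                                   const struct cpumask *cpumask, bool force)
  {
          /* never pick a CPU that is currently offline */
          unsigned int cpu = cpumask_any_and(cpumask, cpu_online_mask);

          if (cpu >= nr_cpu_ids)
                  return -EINVAL;

          /* ...program the hardware to vector the interrupt to 'cpu'... */
          return 0;
  }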
-
Kirill A. Shutemov authored
commit 9845cbbd upstream. Masayoshi Mizuma reported a bug with the hang of an application under the memcg limit. It happens on a write-protection fault to the huge zero page: if we successfully allocate a huge page to replace the zero page but hit the memcg limit, we need to split the zero page with split_huge_page_pmd() and fall back to small pages. The other part of the problem is that VM_FAULT_OOM has special meaning in the do_huge_pmd_wp_page() context. __handle_mm_fault() expects the page to be split if it sees VM_FAULT_OOM and it will retry page fault handling. This causes an infinite loop if the page was not split. do_huge_pmd_wp_zero_page_fallback() can return VM_FAULT_OOM if it failed to allocate one small page, so falling back to small pages will not help. The solution for this part is to replace VM_FAULT_OOM with VM_FAULT_FALLBACK if fallback is required. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reported-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com> Reviewed-by: Michal Hocko <mhocko@suse.cz> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Charles Keepax authored
commit c4204960 upstream. snd_soc_dapm_sync takes the dapm_mutex internally, but we currently take it externally as well. This patch fixes this. Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Davidlohr Bueso authored
commit f3713fd9 upstream. Commit 93e6f119 ("ipc/mqueue: cleanup definition names and locations") added global hardcoded limits to the amount of message queues that can be created. While these limits are per-namespace, reality is that it ends up breaking userspace applications. Historically users have, at least in theory, been able to create up to INT_MAX queues, and limiting it to just 1024 is way too low and dramatic for some workloads and use cases. For instance, Madars reports: "This update imposes bad limits on our multi-process application. As our app uses approaches that each process opens its own set of queues (usually something about 3-5 queues per process). In some scenarios we might run up to 3000 processes or more (which of-course for linux is not a problem). Thus we might need up to 9000 queues or more. All processes run under one user." Other affected users can be found in launchpad bug #1155695: https://bugs.launchpad.net/ubuntu/+source/manpages/+bug/1155695 Instead of increasing this limit, revert it entirely and fallback to the original way of dealing queue limits -- where once a user's resource limit is reached, and all memory is used, new queues cannot be created. Signed-off-by: Davidlohr Bueso <davidlohr@hp.com> Reported-by: Madars Vitolins <m@silodev.com> Acked-by: Doug Ledford <dledford@redhat.com> Cc: Manfred Spraul <manfred@colorfullife.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Jan Kara authored
commit 1362f4ea upstream. Currently last dqput() can race with dquot_scan_active() causing it to call callback for an already deactivated dquot. The race is as follows:

CPU1                                            CPU2
dqput()
  spin_lock(&dq_list_lock);
  if (atomic_read(&dquot->dq_count) > 1) {
    - not taken
  if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
    spin_unlock(&dq_list_lock);
    ->release_dquot(dquot);
      if (atomic_read(&dquot->dq_count) > 1)
        - not taken
                                                dquot_scan_active()
                                                  spin_lock(&dq_list_lock);
                                                  if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
                                                    - not taken
                                                  atomic_inc(&dquot->dq_count);
                                                  spin_unlock(&dq_list_lock);
      - proceeds to release dquot
                                                ret = fn(dquot, priv);
                                                  - called for inactive dquot

Fix the problem by making sure possible ->release_dquot() is finished by the time we call the callback and new calls to it will notice reference dquot_scan_active() has taken and bail out. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Dan Williams authored
commit da87ca4d upstream. Since commit 77873803 "net_dma: mark broken" we no longer pin dma engines active for the network-receive-offload use case. As a result the ->free_chan_resources() that occurs after the driver self test no longer has a NET_DMA induced ->alloc_chan_resources() to back it up. A late firing irq can lead to ksoftirqd spinning indefinitely due to the tasklet_disable() performed by ->free_chan_resources(). Only ->alloc_chan_resources() can clear this condition in affected kernels. This problem has been present since commit 3e037454 "I/OAT: Add support for MSI and MSI-X" in 2.6.24, but is now exposed. Given the NET_DMA use case is deprecated we can revisit moving the driver to use threaded irqs. For now, just tear down the irq and tasklet properly by: 1/ Disable the irq from triggering the tasklet 2/ Disable the irq from re-arming 3/ Flush inflight interrupts 4/ Flush the timer 5/ Flush inflight tasklets References: https://lkml.org/lkml/2014/1/27/282 https://lkml.org/lkml/2014/2/19/672 Cc: Ingo Molnar <mingo@elte.hu> Cc: Steven Rostedt <rostedt@goodmis.org> Reported-by: Mike Galbraith <bitbucket@online.de> Reported-by: Stanislav Fomichev <stfomichev@yandex-team.ru> Tested-by: Mike Galbraith <bitbucket@online.de> Tested-by: Stanislav Fomichev <stfomichev@yandex-team.ru> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
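A hedged sketch of the quiesce ordering listed above; the channel structure and the run flag are illustrative stand-ins for the ioatdma internals:

  struct quiesce_chan {                           /* illustrative stand-in */
          unsigned long state;
          struct timer_list timer;
          struct tasklet_struct cleanup_task;
  };
  #define CHAN_RUN 0

  static void chan_quiesce(struct quiesce_chan *chan, unsigned int irq)
  {
          clear_bit(CHAN_RUN, &chan->state);      /* 1+2: irq handler stops (re)arming the tasklet */
          synchronize_irq(irq);                   /* 3: flush in-flight interrupts */
          del_timer_sync(&chan->timer);           /* 4: flush the timer */
          tasklet_kill(&chan->cleanup_task);      /* 5: flush in-flight tasklets */
  }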
-
Eric Paris authored
commit 9085a642 upstream. When writing policy via /sys/fs/selinux/policy I wrote the type and class of filename trans rules in CPU endian instead of little endian. On x86_64 this works just fine, but it means that on big endian arch's like ppc64 and s390 userspace reads the policy and converts it from le32_to_cpu. So the values are all screwed up. Write the values in le format like it should have been to start. Signed-off-by: Eric Paris <eparis@redhat.com> Acked-by: Stephen Smalley <sds@tycho.nsa.gov> Signed-off-by: Paul Moore <pmoore@redhat.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
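A hedged sketch of the corrected write, mirroring how the policy writer emits other fields; the field names follow the filename-transition structure and the buffer handling is simplified:

  __le32 buf[3];

  buf[0] = cpu_to_le32(ft->stype);        /* previously written in CPU endian */
  buf[1] = cpu_to_le32(ft->ttype);
  buf[2] = cpu_to_le32(ft->tclass);
  rc = put_entry(buf, sizeof(u32), 3, fp);
  if (rc)
          return rc;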
-
Max Filippov authored
commit e2fd1374 upstream. Most in-kernel users want registers spilled on the kernel stack and don't require PS.EXCM to be set. That means that they don't need fixup routine and could reuse regular window overflow mechanism for that, which makes spill routine very simple. Suggested-by: Chris Zankel <chris@zankel.net> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Max Filippov authored
commit 3251f1e2 upstream. We need it saved because it contains a3 where we track which register windows we still need to spill, and fixup handler may call C exception handlers. Also fix comments. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Andrew Lunn authored
commit d86e9af6 upstream. Enabling SPARSE_IRQ shows up a bug in the irq-orion bridge interrupt handler. The bridge interrupt is implemented using a single generic chip. Thus the parameter passed to irq_get_domain_generic_chip() should always be zero. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Fixes: 9dbd90f1 ("irqchip: Add support for Marvell Orion SoCs") Signed-off-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
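A hedged one-liner illustrating the fix, wrapped in a helper for readability; the surrounding bridge handler code is omitted:

  static struct irq_chip_generic *bridge_chip(struct irq_domain *d)
  {
          /* the bridge interrupts are covered by a single generic chip,
           * so the chip index passed here must always be 0 */
          return irq_get_domain_generic_chip(d, 0);
  }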
-
Sebastian Hesselbarth authored
commit e0318ec3 upstream. Bridge IRQ_CAUSE bits are asserted regardless of the corresponding bit in IRQ_MASK register. To avoid interrupt events on stale irqs, we have to clear them before unmask. This installs an .irq_startup callback to ensure stale irqs are cleared before initial unmask. Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Signed-off-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
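A hedged sketch of the .irq_startup callback described above; ack and unmask are the generic-chip callbacks, and the function name follows irqchip conventions:

  static unsigned int orion_bridge_irq_startup(struct irq_data *d)
  {
          struct irq_chip_type *ct = irq_data_get_chip_type(d);

          ct->chip.irq_ack(d);            /* clear any stale cause bit... */
          ct->chip.irq_unmask(d);         /* ...before the first unmask */
          return 0;
  }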
-
Sebastian Hesselbarth authored
commit 5f40067f upstream. Bridge irqs are edge-triggered, i.e. they get asserted on low-to-high transitions and not on the level of the downstream interrupt line. This replaces handle_level_irq by the more appropriate handle_edge_irq. Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Signed-off-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Sebastian Hesselbarth authored
commit 7b119fd1 upstream. It is good practice to mask and clear pending irqs on init. We already mask all irqs, so also clear the bridge irq cause register. Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Tested-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Signed-off-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit 37c367ec upstream. HP Folio 13 may have a broken BIOS that doesn't set up the mute LED GPIO properly, and the driver guesses it wrongly, too. Add a new fixup entry for setting the GPIO pin statically for this laptop. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=70991 Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Peter Zijlstra authored
commit e3703f8c upstream. Drew Richardson reported that he could make the kernel go *boom* when hotplugging while having perf events active. It turned out that when you have a group event, the code in __perf_event_exit_context() fails to remove the group siblings from the context. We then proceed with destroying and freeing the event, and when you re-plug the CPU and try and add another event to that CPU, things go *boom* because you've still got dead entries there. Reported-by: Drew Richardson <drew.richardson@arm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/n/tip-k6v5wundvusvcseqj1si0oz0@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Will Deacon authored
commit 57ca90f6 upstream. Whilst trying to bring-up an SMMUv2 implementation with the table walker plumbed into a coherent interconnect, I noticed that the memory transactions targetting the CPU caches from the SMMU were marked as outer-shareable instead of inner-shareable. After a bunch of digging, it seems that we actually need to program CBARn.BPSHCFG for s1-s2-bypass contexts to act as non-shareable in order for the shareability configured in the corresponding TTBCR not to be overridden with an outer-shareable attribute. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-