- 06 May, 2015 3 commits
-
-
Yves-Alexis Perez authored
commit c0278669 upstream. This model uses the same dock port as the previous generation. Signed-off-by: Yves-Alexis Perez <corsac@debian.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Thierry Reding authored
commit 5e43e259 upstream. The number of reset controls is 32 times the number of peripheral register banks rather than 32 times the number of clocks. This reduces (drastically) the number of reset controls registered from 10080 (315 clocks * 32) to 224 (6 peripheral register banks * 32). This also fixes a potential crash because trying to use any of the excess reset controls (224-10079) would have caused accesses beyond the array bounds of the peripheral register banks definition array. Cc: Peter De Schrijver <pdeschrijver@nvidia.com> Cc: Prashant Gaikwad <pgaikwad@nvidia.com> Fixes: 6d5b988e ("clk: tegra: implement a reset driver") Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
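A minimal sketch of the sizing change described above; identifiers such as periph_banks are illustrative and not the literal upstream diff:

    /* Size the reset controller by peripheral register banks, not by clocks. */
    rst_ctlr.nr_resets = periph_banks * 32;   /* 6 banks * 32  = 224   */
    /* was: rst_ctlr.nr_resets = num * 32;       315 clocks * 32 = 10080 */
    reset_controller_register(&rst_ctlr);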
-
Markos Chandras authored
commit f7f8aea4 upstream. memsize denotes the amount of RAM we can access from kseg{0,1} and that should be up to 256M. In case the bootloader reports a value higher than that (perhaps reporting all the available RAM) it's best if we fix it ourselves and just warn the user about that. This is usually a problem with the bootloader and/or its environment. [ralf@linux-mips.org: Remove useless parens as suggested by Sergei. Reformat long pr_warn statement to fit into 80 column limit.] Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9362/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
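A hedged sketch of the clamp described above; the warning text is illustrative, not the exact upstream wording:

    /* kseg0/kseg1 can only map the first 256MB, so clamp anything larger
     * that the bootloader hands us and warn about it. */
    if (memsize > 0x10000000) {
        pr_warn("Bootloader reported memsize above 256MB, clamping to 256MB\n");
        memsize = 0x10000000;
    }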
-
- 05 May, 2015 37 commits
-
-
Markos Chandras authored
commit 60cd7e08 upstream. Introduce new macros for kernel load/store variants which will be used to perform regular kernel space load/store operations in EVA mode. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9500/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Michael Gernoth authored
commit 91bf0c2d upstream. The functions snd_emu10k1_proc_spdif_read and snd_emu1010_fpga_read acquire the emu_lock before accessing the FPGA. The function used to access the FPGA (snd_emu1010_fpga_read) also tries to take the emu_lock which causes a deadlock. Remove the outer locking in the proc-functions (guarding only the already safe fpga read) to prevent this deadlock. [removed superfluous flags variables too -- tiwai] Signed-off-by: Michael Gernoth <michael@gernoth.net> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
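A hedged sketch of the deadlock pattern described above; variable names are illustrative:

    /* Before the fix: the proc handler took emu_lock itself ... */
    spin_lock_irqsave(&emu->emu_lock, flags);
    snd_emu1010_fpga_read(emu, reg, &val);   /* ... but this helper also takes
                                              * emu_lock, deadlocking on the
                                              * non-recursive spinlock. */
    spin_unlock_irqrestore(&emu->emu_lock, flags);

    /* After the fix: call the helper directly and rely on its own locking. */
    snd_emu1010_fpga_read(emu, reg, &val);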
-
NeilBrown authored
commit 47d68979 upstream. Since commit 20d0189b in v3.14-rc1 RAID0 has performed incorrect calculations when the chunksize is not a power of 2. This happens because "sector_div()" modifies its first argument, but this wasn't taken into account in the patch. So restore that first arg before re-using the variable. Reported-by: Joe Landman <joe.landman@gmail.com> Reported-by: Dave Chinner <david@fromorbit.com> Fixes: 20d0189b Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
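A hedged sketch of the pitfall; chunk_sects and sect_in_chunk are illustrative names:

    sector_t sector = bio->bi_iter.bi_sector;

    /* sector_div() writes the quotient back into 'sector' and returns the
     * remainder, so 'sector' no longer holds the original value. */
    sect_in_chunk = sector_div(sector, chunk_sects);

    /* restore the original sector before it is used again */
    sector = bio->bi_iter.bi_sector;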
-
K. Y. Srinivasan authored
commit 8de58074 upstream. We may exit this function without properly freeing up the mappings we may have acquired. Fix the bug. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Long Li <longli@microsoft.com> Signed-off-by: James Bottomley <JBottomley@Odin.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Sifan Naeem authored
commit 80ccf4ad upstream. img_ir_remove() passes a pointer to the ISR function as the 2nd parameter to free_irq() instead of a pointer to the device data structure. This issue causes unloading the img-ir module to fail with the below warning after building and loading img-ir as a module. WARNING: CPU: 2 PID: 155 at ../kernel/irq/manage.c:1278 __free_irq+0xb4/0x214() Trying to free already-free IRQ 58 Modules linked in: img_ir(-) CPU: 2 PID: 155 Comm: rmmod Not tainted 3.14.0 #55 ... Call Trace: ... [<8048d420>] __free_irq+0xb4/0x214 [<8048d6b4>] free_irq+0xac/0xf4 [<c009b130>] img_ir_remove+0x54/0xd4 [img_ir] [<8073ded0>] platform_drv_remove+0x30/0x54 ... Fixes: 160a8f8a ("[media] rc: img-ir: add base driver") Signed-off-by: Sifan Naeem <sifan.naeem@imgtec.com> Acked-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
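A hedged sketch of the mismatch; the priv field names are illustrative:

    /* free_irq() must be given the same dev_id cookie that was passed to
     * request_irq(), not the handler function. */
    free_irq(priv->irq, img_ir_isr);   /* wrong: handler as dev_id, frees nothing */
    free_irq(priv->irq, priv);         /* right: matches request_irq()'s dev_id   */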
-
Gregory CLEMENT authored
commit 61819549 upstream. Level IRQ handlers and edge IRQ handlers are managed by two different sets of registers, but currently the driver uses the same mask for both registers. This leads to issues with the following scenario: first, an IRQ is requested on a GPIO to be triggered on edge; then another IRQ is requested for a GPIO of the same bank but triggered on level. The first one will then also be set up to be triggered on level, which leads to an interrupt storm. The different kinds of handler are already associated with two different irq chip types. With this patch the driver uses a private mask for each one, which solves this issue. It has been tested on an Armada XP based board and on an Armada 375 board. For both boards, with this patch applied, there is no such interrupt storm when running the previous scenario. This bug was already fixed, but in a different way, in the legacy version of this driver by Evgeniy Dushistov: 9ece8839 "ARM: orion: Fix for certain sequence of request_irq can cause irq storm". The fact that the new version of the gpio driver could be affected had been discussed there: http://thread.gmane.org/gmane.linux.ports.arm.kernel/344670/focus=364012 Reported-by: Evgeniy A. Dushistov <dushistov@mail.ru> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Radim Krčmář authored
commit 80f7fdb1 upstream. If we were migrated right after __getcpu, but before reading the migration_count, we wouldn't notice that we read TSC of a different VCPU, nor that KVM's bug made pvti invalid, as only migration_count on source VCPU is increased. Change vdso instead of updating migration_count on destination. Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Fixes: 0a4e6be9 ("x86: kvm: Revert "remove sched notifier for cross-cpu migrations"") Message-Id: <1428000263-11892-1-git-send-email-rkrcmar@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Sagi Grimberg authored
commit 4a579da2 upstream. Before we reach the connection-established stage we may get an error event. In this case the core won't tear down this connection (it was never established), so we take care of freeing it ourselves. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Sagi Grimberg authored
commit 364189f0 upstream. This hang was the result of a missing command put when a DIF error occurred during an rdma read (and we sent a CHECK_CONDITION error without passing it to the backend). Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit bbc78c07 upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit 59c9904c upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Signed-off-by: Felipe Balbi <balbi@ti.com> [ luis: backported to 3.16: - file rename: drivers/usb/isp1760/isp1760-hcd.c -> drivers/usb/host/isp1760-hcd.c ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit 74bd7b69 upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit 08debfb1 upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit 7a606ac2 upstream. While this driver was already using a 50ms resume timeout, let's make sure everybody uses the same macro so it's easy to fix later should anything go wrong. It also gives a more "stable" expectation to Linux users. Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit 84c0d178 upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Signed-off-by: Felipe Balbi <balbi@ti.com> [ luis: backported to 3.16: - replaced 'hub_wq' by 'khubd' in comment ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit 595227db upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit 7e136bb7 upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit 8c0ae657 upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit 309be239 upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Based on original work by Bin Liu <b-liu@ti.com> Cc: Bin Liu <b-liu@ti.com> Signed-off-by: Felipe Balbi <balbi@ti.com> [ luis: backported to 3.16: - only two msecs_to_jiffies() invocations - adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit b8fb6f79 upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit ea16328f upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Signed-off-by: Felipe Balbi <balbi@ti.com> [ luis: backported to 3.16: - replaced 'hub_wq' by 'khubd' in comment ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit b9e45188 upstream. Make sure we're using the new macro, so our resume signaling will always pass certification. Acked-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Felipe Balbi authored
commit 62f0342d upstream. Every USB Host controller should use this new macro to define for how long resume signalling should be driven on the bus. Currently, almost every single USB controller is using a 20ms timeout for resume signalling. That's problematic for two reasons: a) sometimes that 20ms timer expires a little before 20ms, which makes us fail certification b) some (many) devices actually need more than 20ms resume signalling. Sure, in case of (b) we can state that the device is against the USB spec, but the fact is that we have no control over which device the certification lab will use. We also have no control over which host they will use. Most likely they'll be using a Windows PC which, again, we have no control over how that USB stack is written and how long resume signalling they are using. At the end of the day, we must make sure Linux passes electrical compliance when working as Host or as Device and currently we don't pass compliance as host because we're driving resume signalling for exactly 20ms and that confuses the certification test setup, resulting in certification failure. Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: Peter Chen <peter.chen@freescale.com> Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
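A hedged sketch of the idea behind the macro used by the series above; the actual value lives in the upstream header, 40ms is shown only for illustration:

    #define USB_RESUME_TIMEOUT  40  /* ms; one shared constant for all HCDs */

    /* host controller resume path: */
    msleep(USB_RESUME_TIMEOUT);     /* instead of a hard-coded msleep(20) */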
-
Ulrik De Bie authored
commit bd884149 upstream. On ASUS TP500LN and X750JN, the touchpad absolute mode is reset each time set_rate is done. In order to fix this, we verify the firmware version, and if it matches the one in those laptops, the set_rate function is overridden with a function elantech_set_rate_restore_reg_07 that performs the set_rate with the original function, followed by a restore of reg_07 (the register that sets the absolute mode on elantech v4 hardware). Also the ASUS TP500LN and X750JN firmware version, capabilities, and button constellation are added to elantech.c Reported-and-tested-by: George Moutsopoulos <gmoutso@yahoo.co.uk> Signed-off-by: Ulrik De Bie <ulrik.debie-os@e2big.org> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
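A hedged sketch of the override following the description; the original_set_rate and reg_07 fields are taken from the description and treated as illustrative:

    static void elantech_set_rate_restore_reg_07(struct psmouse *psmouse,
                                                 unsigned int rate)
    {
        struct elantech_data *etd = psmouse->private;

        /* perform the normal set_rate, then put the absolute-mode register back */
        etd->original_set_rate(psmouse, rate);
        if (elantech_write_reg(psmouse, 0x07, etd->reg_07))
            psmouse_err(psmouse, "failed to restore reg_07\n");
    }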
-
Lukas Czerner authored
commit e12fb972 upstream. Previously commit 14ece102 added support for syncing the parent directory of newly created inodes to make sure that the inode is not lost after a power failure in no-journal mode. However this does not work in the majority of cases, namely: - if the directory has inline data - if the directory is already indexed - if the directory already has at least one block and: - the new entry fits into it - or we've successfully converted it to indexed So in those cases we might lose the inode entirely even after fsync in the no-journal mode. This also includes ext2 default mode obviously. I've noticed this while running xfstest generic/321 and even though the test should fail (we need to run fsck after a crash in no-journal mode) I could not find the newly created entries even if they had been fsynced before. Fix this by adjusting the ext4_add_entry() successful exit paths to set the inode EXT4_STATE_NEWENTRY so that fsync has the chance to fsync the parent directory as well. Signed-off-by: Lukas Czerner <lczerner@redhat.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Jan Kara <jack@suse.cz> Cc: Frank Mayhar <fmayhar@google.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
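A hedged sketch of what the successful exit paths end up doing; the surrounding variable names are illustrative:

    /* mark the new inode so ext4 fsync also syncs the parent directory
     * in no-journal mode */
    if (!retval)
        ext4_set_inode_state(inode, EXT4_STATE_NEWENTRY);
    return retval;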
-
Eric W. Biederman authored
commit e819f152 upstream. - Remove the unneeded declaration from pnode.h - Mark umount_tree static as it has no callers outside of namespace.c - Define an enumeration of umount_tree's flags. - Pass umount_tree's flags in by name This removes the magic numbers 0, 1 and 2 making the code a little clearer and makes it possible for there to be lazy unmounts that don't propagate. Which is what __detach_mounts actually wants for example. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> [ luis: backported to 3.16: - dropped change to __detach_mounts() ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
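For reference, a sketch of the flag enumeration replacing the magic 0/1/2, per the description above (treat the exact values as illustrative):

    enum umount_tree_flags {
        UMOUNT_SYNC = 1,
        UMOUNT_PROPAGATE = 2,
    };

    /* a caller then reads as, for example: */
    umount_tree(mnt, UMOUNT_PROPAGATE | UMOUNT_SYNC);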
-
Ezequiel Garcia authored
commit aeff0927 upstream. The available (i.e. not used) buffers are returned by stk1160_clear_queue(), on the stop_streaming() path. However, this is insufficient and the current buffer must be released as well. Fix it. Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar> Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Pascal Huerst authored
commit 74ff9602 upstream. The delay time after a reset in the codec probe callback was too short, and did not work on certain hw because the codec needs more time to power on. This increases the delay time from 1us to 1ms. Signed-off-by: Pascal Huerst <pascal.huerst@gmail.com> Acked-by: Brian Austin <brian.austin@cirrus.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Huacai Chen authored
commit 2a21dc7c upstream. We found that TLB mismatch not only happens after kernel resume, but also happens during snapshot restore. So move it to the beginning of swsusp_arch_suspend(). Signed-off-by: Huacai Chen <chenhc@lemote.com> Cc: Steven J. Hill <Steven.Hill@imgtec.com> Cc: linux-mips@linux-mips.org Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Patchwork: https://patchwork.linux-mips.org/patch/9621/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Andrey Ryabinin authored
commit 8defb336 upstream. Usually ELF_ET_DYN_BASE is 2/3 of TASK_SIZE. With a 3G/1G user/kernel split this is not so, because 2*TASK_SIZE overflows 32 bits, so the actual value of ELF_ET_DYN_BASE is: (2 * TASK_SIZE / 3) = 0x2a000000 When ASLR is disabled, PIE binaries will load at the ELF_ET_DYN_BASE address. On 32bit platforms AddressSanitizer uses addresses [0x20000000 - 0x40000000] for shadow memory [1]. So ASan doesn't work for PIE binaries when ASLR is disabled, as it fails to map shadow memory. Also, after Kees's 'split ET_DYN ASLR from mmap ASLR' patchset, PIE binaries have a high chance of loading somewhere in between [0x2a000000 - 0x40000000] even if ASLR is enabled. This makes ASan with PIE absolutely incompatible. Fix the overflow by dividing TASK_SIZE prior to multiplying. After this patch ELF_ET_DYN_BASE equals (for CONFIG_VMSPLIT_3G=y): (TASK_SIZE / 3 * 2) = 0x7f555554 [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerAlgorithm#Mapping Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com> Reported-by: Maria Guseva <m.guseva@samsung.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
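The arithmetic, as a hedged sketch (the _OLD/_NEW names are only for side-by-side comparison):

    /* With CONFIG_VMSPLIT_3G, ARM's TASK_SIZE sits just under 3GB, so
     * multiplying first wraps around in 32-bit arithmetic. */
    #define ELF_ET_DYN_BASE_OLD (2 * TASK_SIZE / 3)  /* wraps -> 0x2a000000    */
    #define ELF_ET_DYN_BASE_NEW (TASK_SIZE / 3 * 2)  /* divide first -> 0x7f555554 */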
-
James Hogan authored
commit 98119ad5 upstream. Guest user mode can generate a guest MSA Disabled exception on an MSA capable core by simply trying to execute an MSA instruction. Since this exception is unknown to KVM it will be passed on to the guest kernel. However guest Linux kernels prior to v3.15 do not set up an exception handler for the MSA Disabled exception as they don't support any MSA capable cores. This results in a guest OS panic. Since an older processor ID may be being emulated, and MSA support is not advertised to the guest, the correct behaviour is to generate a Reserved Instruction exception in the guest kernel so it can send the guest process an illegal instruction signal (SIGILL), as would happen with a non-MSA-capable core. Fix this as minimally as reasonably possible by preventing kvm_mips_check_privilege() from relaying MSA Disabled exceptions from guest user mode to the guest kernel, and handling the MSA Disabled exception by emulating a Reserved Instruction exception in the guest, via a new handle_msa_disabled() KVM callback. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Gleb Natapov <gleb@kernel.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org [ luis: backported to 3.16: files renamed: - arch/mips/kvm/emulate.c -> arch/mips/kvm/kvm_mips_emul.c - arch/mips/kvm/mips.c -> arch/mips/kvm/kvm_mips.c] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
David Sterba authored
commit 3c3b04d1 upstream. Due to an insufficient check in btrfs_is_valid_xattr, this unexpectedly works: $ touch file $ setfattr -n user. -v 1 file $ getfattr -d file user.="1" i.e. the attribute name is missing after the namespace. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=94291 Reported-by: William Douglas <william.douglas@intel.com> Signed-off-by: David Sterba <dsterba@suse.cz> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
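A hedged sketch of the missing validation, shown for the "user." namespace only and with a simplified return convention:

    /* reject an xattr whose name is nothing but the namespace prefix */
    if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) &&
        strlen(name) == XATTR_USER_PREFIX_LEN)
        return -EINVAL;   /* e.g. "user." with no attribute body */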
-
Filipe Manana authored
commit dcc82f47 upstream. While committing a transaction we free the log roots before we write the new super block. Freeing the log roots implies marking the disk location of every node/leaf (metadata extent) as pinned before the new super block is written. This is to prevent the disk location of log metadata extents from being reused before the new super block is written, otherwise we would have a corrupted log tree if before the new super block is written a crash/reboot happens and the location of any log tree metadata extent ended up being reused and rewritten. Even though we pinned the log tree's metadata extents, we were issuing a discard against them if the fs was mounted with the -o discard option, resulting in corruption of the log tree if a crash/reboot happened before writing the new super block - the next time the fs was mounted, during the log replay process we would find nodes/leafs of the log btree with a content full of zeroes, causing the process to fail and require the use of the tool btrfs-zero-log to wipe out the log tree (with all previously fsynced data lost forever). Fix this by not doing a discard when pinning an extent. The discard will be done later when it's safe (after the new super block is committed) at extent-tree.c:btrfs_finish_extent_commit(). Fixes: e688b725 (Btrfs: fix extent pinning bugs in the tree log) Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
K. Y. Srinivasan authored
commit 73cffdb6 upstream. Don't wait after sending request for offers to the host. This wait is unnecessary and simply adds 5 seconds to the boot time. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-
Nicholas Bellinger authored
commit 88dcd2da upstream. This patch converts iscsi-target code to use modern kthread.h API callers for creating RX/TX threads for each new iscsi_conn descriptor, and releasing associated RX/TX threads during connection shutdown. This is done using iscsit_start_kthreads() -> kthread_run() to start new kthreads from within iscsi_post_login_handler(), and invoking kthread_stop() from existing iscsit_close_connection() code. Also, convert iscsit_logout_post_handler_closesession() code to use cmpxchg when determining when iscsit_cause_connection_reinstatement() needs to sleep waiting for completion. Reported-by: Sagi Grimberg <sagig@mellanox.com> Tested-by: Sagi Grimberg <sagig@mellanox.com> Cc: Slava Shwartsman <valyushash@gmail.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> [ luis: backported to 3.16: - file rename: include/target/iscsi/iscsi_target_core.h -> drivers/target/iscsi/iscsi_target_core.h - adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
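A hedged sketch of the kthread lifecycle described above; the tx_thread/rx_thread fields and thread names follow the description and are illustrative:

    /* login path (per iscsi_conn): */
    conn->tx_thread = kthread_run(iscsi_target_tx_thread, conn, "iscsi_ttx");
    if (IS_ERR(conn->tx_thread))
        return PTR_ERR(conn->tx_thread);
    conn->rx_thread = kthread_run(iscsi_target_rx_thread, conn, "iscsi_trx");
    if (IS_ERR(conn->rx_thread)) {
        kthread_stop(conn->tx_thread);
        return PTR_ERR(conn->rx_thread);
    }

    /* ... later, from iscsit_close_connection(): */
    kthread_stop(conn->rx_thread);
    kthread_stop(conn->tx_thread);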
-
Manish Badarkhe authored
commit a57069e3 upstream. As the davinci card gets registered using the 'devm_' API, there is no need to unregister the card in the 'remove' function. Hence drop the 'remove' function. Fixes: ee2f615d (ASoC: davinci-evm: Add device tree binding) Signed-off-by: Manish Badarkhe <manishvb@ti.com> Signed-off-by: Jyri Sarha <jsarha@ti.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
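A hedged sketch of why the .remove callback becomes redundant; error handling is simplified:

    /* the devm_ variant unregisters the card automatically on driver detach,
     * so a .remove that only called snd_soc_unregister_card() can be dropped */
    ret = devm_snd_soc_register_card(&pdev->dev, card);
    if (ret)
        dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", ret);
    return ret;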
-
Charles Keepax authored
commit 4e330ae4 upstream. There are two PMICs on Cragganmore; currently one dynamically assigns its IRQ base and the other uses a fixed base. It is possible for the statically assigned PMIC to fail if its IRQ is taken by the dynamically assigned one. Fix this by statically assigning both the IRQ bases. Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Kukjin Kim <kgene@kernel.org> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
-