- 16 Dec, 2015 16 commits
-
James Hogan authored
commit 002374f3 upstream. ASID restoration on guest resume should determine the guest execution mode based on the guest Status register rather than bit 30 of the guest PC. Fix the two places in locore.S that do this, loading the guest status from the cop0 area. Note, this assembly is specific to the trap & emulate implementation of KVM, so it doesn't need to check the supervisor bit as that mode is not implemented in the guest. Fixes: b680f70f ("KVM/MIPS32: Entry point for trampolining to...") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Gleb Natapov <gleb@kernel.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [ luis: backported to 3.16: - file rename: locore.S -> kvm_locore.S ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Li Jun authored
commit 251b3c8b upstream. Since ci->role is only set after the host role start has completed, a USB interrupt that fires while the host is starting finds no handler that cares for it, if the USB irq is enabled. This error can be reproduced on an i.MX6 SoloLite EVK board by: 1. disable the otg id irq (IDIE) and disable all real otg properties of usbotg1 in the dts. 2. boot up the board with the ID cable and a usb device connected. 3. echo gadget > /sys/kernel/debug/ci_hdrc.0/role 4. echo host > /sys/kernel/debug/ci_hdrc.0/role 5. irq 212: nobody cared. Signed-off-by: Li Jun <jun.li@freescale.com> Signed-off-by: Peter Chen <peter.chen@freescale.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Junichi Nomura authored
commit 5bbbfdf6 upstream. dm-mpath retries an ioctl when no path is readily available and the device is configured to queue I/O in such a case. If you want to stop the retry before multipathd decides to turn off queueing mode, you could send a signal for the process to exit from the loop. However, the fatal-signal check was not carried along when commit 6c182cd8 ("dm mpath: fix ioctl deadlock when no paths") moved the loop from dm-mpath to dm core. As a result, we can't terminate such a process in the retry loop. An easy reproducer of the situation is: # dmsetup create mp --table '0 1024 multipath 0 0 0 0' # dmsetup message mp 0 'queue_if_no_path' # sg_inq /dev/mapper/mp then you should be able to terminate sg_inq by pressing Ctrl+C. Fixes: 6c182cd8 ("dm mpath: fix ioctl deadlock when no paths") Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com> Cc: Hannes Reinecke <hare@suse.de> Cc: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> [ kamal: backport to 3.13-stable: context ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
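A minimal sketch of what such an interruptible retry loop looks like (the helper name try_ioctl_once() is a placeholder, not the actual dm function; only the loop shape and the fatal-signal check mirror the fix):

```c
#include <linux/device-mapper.h>
#include <linux/sched.h>
#include <linux/delay.h>

static int try_ioctl_once(struct mapped_device *md, unsigned int cmd,
			  unsigned long arg);	/* hypothetical stand-in */

static int dm_ioctl_retry_sketch(struct mapped_device *md, unsigned int cmd,
				 unsigned long arg)
{
	int r;

	for (;;) {
		r = try_ioctl_once(md, cmd, arg);
		if (r != -ENOTCONN)		/* got a path, or a hard error */
			return r;
		if (fatal_signal_pending(current))
			return -EINTR;		/* let the user kill the process */
		msleep(10);			/* no path yet: wait and retry */
	}
}
```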
-
Ben McCauley authored
commit b9e51b2b upstream. In some SoCs, dwc3 is implemented as a USB2.0-only core, meaning that it can't ever achieve SuperSpeed. The current driver sets gadget.max_speed to USB_SPEED_SUPER unconditionally. This can cause issues with some host stacks: the host will issue a GetBOS() request and we will reply with a BOS containing a SuperSpeed Capability Descriptor. At least Windows seems to be upset by this fact and prints a warning that we should connect $this device to another port. [ balbi@ti.com : rewrote entire commit, including source code comment to make a lot clearer what the problem is ] Signed-off-by: Ben McCauley <ben.mccauley@garmin.com> Signed-off-by: Felipe Balbi <balbi@ti.com> [ luis: backported to 3.16: - used dev_vdbg() instead of dwc3_trace() ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Johannes Berg authored
commit c2e703a5 upstream. When using call_rcu(), the called function may be delayed quite significantly, and without a matching rcu_barrier() there's no way to be sure it has finished. Therefore, global state that could be gone/freed/reused should never be touched in the callback. Fix this in mesh by moving the atomic_dec() into the caller; that's not really a problem since we already unlinked the path and it will be destroyed anyway. This fixes a crash Jouni observed when running certain tests in a certain order, in which the mesh interface was torn down, the memory reused for a function pointer (work struct) and running that then crashed since the pointer had been decremented by 1, resulting in an invalid instruction byte stream. Fixes: eb2b9311 ("mac80211: mesh path table implementation") Reported-by: Jouni Malinen <j@w1.fi> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Sachin Pandhare authored
commit e9f96bc5 upstream. From datasheet: R17408 (4400h) HPF_C_1 R17409 (4401h) HPF_C_0 17048 -> 17408 (0x4400) 17049 -> 17409 (0x4401) Signed-off-by: Sachin Pandhare <sachinpandhare@gmail.com> Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
David Woodhouse authored
commit 1bcb49e6 upstream. The Honeywell HGI80 is a wireless interface to the evohome connected thermostat. It uses a TI 3410 USB-serial port. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Aleksander Morgado authored
commit e07af133 upstream. Also known as Verizon U620L. The device is modeswitched from 1410:9020 to 1410:9022 by selecting the 4th USB configuration: $ sudo usb_modeswitch -v 0x1410 -p 0x9020 -u 4 This configuration provides an ECM interface as well as TTYs ('Enterprise Mode' according to the U620 Linux integration guide). Signed-off-by: Aleksander Morgado <aleksander@aleksander.es> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Clemens Ladisch authored
commit a91e627e upstream. One of the many faults of the QinHeng CH345 USB MIDI interface chip is that it does not handle received SysEx messages correctly -- every second event packet has a wrong code index number, which is the one from the last seen message, instead of 4. For example, the two messages "FE F0 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E F7" result in the following event packets: correct: CH345: 0F FE 00 00 0F FE 00 00 04 F0 01 02 04 F0 01 02 04 03 04 05 0F 03 04 05 04 06 07 08 04 06 07 08 04 09 0A 0B 0F 09 0A 0B 04 0C 0D 0E 04 0C 0D 0E 05 F7 00 00 05 F7 00 00 A class-compliant driver must interpret an event packet with CIN 15 as having a single data byte, so the other two bytes would be ignored. The message received by the host would then be missing two bytes out of six; in this example, "F0 01 02 03 06 07 08 09 0C 0D 0E F7". These corrupted SysEx event packages contain only data bytes, while the CH345 uses event packets with a correct CIN value only for messages with a status byte, so it is possible to distinguish between these two cases by checking for the presence of this status byte. (Other bugs in the CH345's input handling, such as the corruption resulting from running status, cannot be worked around.) Signed-off-by: Clemens Ladisch <clemens@ladisch.de> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
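A simplified sketch of that distinguishing check (the real driver logic is more involved and also tracks whether a SysEx transfer is in progress):

```c
#include <stdbool.h>
#include <stdint.h>

/* One USB MIDI event packet: byte 0 = cable number/CIN, bytes 1-3 = MIDI data. */
static bool ch345_corrupted_sysex(const uint8_t pkt[4])
{
	uint8_t cin = pkt[0] & 0x0f;

	/* CINs 4, 6 and 7 legitimately start with a data byte.  Any other CIN
	 * normally starts with a status byte (MSB set); if it does not, the
	 * packet is one of the chip's mislabelled SysEx continuations and
	 * should be treated as CIN 4 (SysEx continues, three data bytes). */
	return cin != 0x04 && cin != 0x06 && cin != 0x07 && !(pkt[1] & 0x80);
}
```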
-
Clemens Ladisch authored
commit 1ca8b201 upstream. The CH345 USB MIDI chip has two output ports. However, they are multiplexed through one pin, and the number of ports cannot be reduced even for hardware that implements only one connector, so for those devices, data sent to either port ends up on the same hardware output. This becomes a problem when both ports are used at the same time, as longer MIDI commands (such as SysEx messages) are likely to be interrupted by messages from the other port, and thus to get lost. It would not be possible for the driver to detect how many ports the device actually has, except that in practice, _all_ devices built with the CH345 have only one port. So we can just ignore the device's descriptors, and hardcode one output port. Signed-off-by: Clemens Ladisch <clemens@ladisch.de> Signed-off-by: Takashi Iwai <tiwai@suse.de> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Clemens Ladisch authored
commit 98d362be upstream. Signed-off-by: Clemens Ladisch <clemens@ladisch.de> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Dave Hansen authored
commit ab6b5294 upstream. (This should have gone to LKML originally. Sorry for the extra noise, folks on the cc.) Background: Signal frames on x86 have two formats: 1. For 32-bit executables (whether on a real 32-bit kernel or under 32-bit emulation on a 64-bit kernel) we have a 'fpregset_t' that includes the "FSAVE" registers. 2. For 64-bit executables (on 64-bit kernels obviously), the 'fpregset_t' is smaller and does not contain the "FSAVE" state. When creating the signal frame, we have to be aware of whether we are running a 32 or 64-bit executable so we create the correct format signal frame. Problem: save_xstate_epilog() uses 'fx_sw_reserved_ia32' whenever it is called for a 32-bit executable. This is for real 32-bit and ia32 emulation. But, fpu__init_prepare_fx_sw_frame() only initializes 'fx_sw_reserved_ia32' when emulation is enabled, *NOT* for real 32-bit kernels. This leads to really weird situations where 32-bit programs lose their extended state when returning from a signal handler. The kernel copies the uninitialized (zero) 'fx_sw_reserved_ia32' out to userspace in save_xstate_epilog(). But when returning from the signal, the kernel errors out in check_for_xstate() when it does not see FP_XSTATE_MAGIC1 present (because it was zeroed). This leads to the FPU/XSAVE state being initialized. For MPX, this leads to the most permissive state and means we silently lose bounds violations. I think this would also mean that we could lose *ANY* FPU/SSE/AVX state. I'm not sure why no one has spotted this bug. I believe this was broken by: 72a671ce ("x86, fpu: Unify signal handling code paths for x86 and x86_64 kernels") way back in 2012. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dave@sr71.net Cc: fenghua.yu@intel.com Cc: yu-cheng.yu@intel.com Link: http://lkml.kernel.org/r/20151111002354.A0799571@viggo.jf.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> [ luis: backported to 3.16: - file and function rename: * arch/x86/kernel/fpu/signal.c -> arch/x86/kernel/xsave.c * fpu__init_prepare_fx_sw_frame() -> prepare_fx_sw_frame() - use 'i387_fsave_struct' instead of 'fregs_state' - adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
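A sketch of the shape of the fix in the 3.16 backport's prepare_fx_sw_frame() (arch/x86/kernel/xsave.c); the exact guard and field names are approximate:

```c
	if (config_enabled(CONFIG_IA32_EMULATION) ||
	    config_enabled(CONFIG_X86_32)) {
		/* Initialize the ia32 copy on real 32-bit kernels too, not
		 * only under IA32 emulation, so 32-bit signal frames carry
		 * FP_XSTATE_MAGIC1 instead of zeroes. */
		int fsave_header_size = sizeof(struct i387_fsave_struct);

		fx_sw_reserved_ia32 = fx_sw_reserved;
		fx_sw_reserved_ia32.extended_size += fsave_header_size;
	}
```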
-
Lars-Peter Clausen authored
commit 785171fd upstream. While the datasheet for the AD7785 lists 0xXB as the product ID the actual product ID is 0xX3. Fix the product ID otherwise the driver will reject the device due to non matching IDs. Fixes: e786cc26 ("staging:iio:ad7793: Implement stricter id checking") Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Lars-Peter Clausen authored
commit 5dcbe97b upstream. The ad5629/ad5669 are the I2C variant of the ad5628/ad5668, which has a SPI interface. They are mostly identical with the exception that the shift factor is different. Currently the driver does not take care of this difference which leads to incorrect DAC output values. Fix this by introducing a custom channel spec for the ad5629/ad5669 with the correct shift factor. Fixes: commit 6a17a076 ("iio:dac:ad5064: Add support for the ad5629r and ad5669r") Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Jonathan Cameron <jic23@kernel.org> [ kamal: backport to 3.13-stable: context ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Michael Hennerich authored
commit 03fe472e upstream. i2c_master_send() returns the number of bytes transferred on success while the ad5064 driver expects that the write() callback returns 0 on success. Fix that by translating any non negative return value of i2c_master_send() to 0. Fixes: commit 6a17a076 ("iio:dac:ad5064: Add support for the ad5629r and ad5669r") Signed-off-by: Michael Hennerich <michael.hennerich@analog.com> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
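A sketch of the corrected write callback (buffer packing and driver plumbing are simplified; the point is the return-value translation):

```c
static int ad5064_i2c_write(struct ad5064_state *st, unsigned int cmd,
			    unsigned int addr, unsigned int val)
{
	struct i2c_client *i2c = to_i2c_client(st->dev);
	u8 buf[3];
	int ret;

	/* pack cmd/addr/val into buf[] as the device expects (omitted) */

	ret = i2c_master_send(i2c, buf, sizeof(buf));
	if (ret < 0)
		return ret;

	return 0;	/* callers expect 0 on success, not a byte count */
}
```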
-
Vladimir Zapolskiy authored
commit 01bb70ae upstream. If common clock framework is configured, the driver generates a warning, which is fixed by this change: root@devkit3250:~# cat /sys/bus/iio/devices/iio\:device0/in_voltage0_raw ------------[ cut here ]------------ WARNING: CPU: 0 PID: 724 at drivers/clk/clk.c:727 clk_core_enable+0x2c/0xa4() Modules linked in: sc16is7xx snd_soc_uda1380 CPU: 0 PID: 724 Comm: cat Not tainted 4.3.0-rc2+ #198 Hardware name: LPC32XX SoC (Flattened Device Tree) Backtrace: [<>] (dump_backtrace) from [<>] (show_stack+0x18/0x1c) [<>] (show_stack) from [<>] (dump_stack+0x20/0x28) [<>] (dump_stack) from [<>] (warn_slowpath_common+0x90/0xb8) [<>] (warn_slowpath_common) from [<>] (warn_slowpath_null+0x24/0x2c) [<>] (warn_slowpath_null) from [<>] (clk_core_enable+0x2c/0xa4) [<>] (clk_core_enable) from [<>] (clk_enable+0x24/0x38) [<>] (clk_enable) from [<>] (lpc32xx_read_raw+0x38/0x80) [<>] (lpc32xx_read_raw) from [<>] (iio_read_channel_info+0x70/0x94) [<>] (iio_read_channel_info) from [<>] (dev_attr_show+0x28/0x4c) [<>] (dev_attr_show) from [<>] (sysfs_kf_seq_show+0x8c/0xf0) [<>] (sysfs_kf_seq_show) from [<>] (kernfs_seq_show+0x2c/0x30) [<>] (kernfs_seq_show) from [<>] (seq_read+0x1c8/0x440) [<>] (seq_read) from [<>] (kernfs_fop_read+0x38/0x170) [<>] (kernfs_fop_read) from [<>] (do_readv_writev+0x16c/0x238) [<>] (do_readv_writev) from [<>] (vfs_readv+0x50/0x58) [<>] (vfs_readv) from [<>] (default_file_splice_read+0x1a4/0x308) [<>] (default_file_splice_read) from [<>] (do_splice_to+0x78/0x84) [<>] (do_splice_to) from [<>] (splice_direct_to_actor+0xc8/0x1cc) [<>] (splice_direct_to_actor) from [<>] (do_splice_direct+0xa0/0xb8) [<>] (do_splice_direct) from [<>] (do_sendfile+0x1a8/0x30c) [<>] (do_sendfile) from [<>] (SyS_sendfile64+0x104/0x10c) [<>] (SyS_sendfile64) from [<>] (ret_fast_syscall+0x0/0x38) Signed-off-by: Vladimir Zapolskiy <vz@mleia.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
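Under the common clock framework a clock must be prepared before it can be enabled, so the read path needs the prepare/enable pair; roughly (surrounding lpc32xx_adc code elided):

```c
	ret = clk_prepare_enable(info->clk);	/* was clk_enable(): WARNs under CCF */
	if (ret)
		return ret;

	/* ... start the conversion and read the result ... */

	clk_disable_unprepare(info->clk);	/* was clk_disable() */
```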
-
- 11 Dec, 2015 3 commits
-
Kees Cook authored
commit 8779657d upstream. This changes the stack protector config option into a choice of "None", "Regular", and "Strong": CONFIG_CC_STACKPROTECTOR_NONE CONFIG_CC_STACKPROTECTOR_REGULAR CONFIG_CC_STACKPROTECTOR_STRONG "Regular" means the old CONFIG_CC_STACKPROTECTOR=y option. "Strong" is a new mode introduced by this patch. With "Strong" the kernel is built with -fstack-protector-strong (available in gcc 4.9 and later). This option increases the coverage of the stack protector without the heavy performance hit of -fstack-protector-all. For reference, the stack protector options available in gcc are: -fstack-protector-all: Adds the stack-canary saving prefix and stack-canary checking suffix to _all_ function entry and exit. Results in substantial use of stack space for saving the canary for deep stack users (e.g. historically xfs), and measurable (though shockingly still low) performance hit due to all the saving/checking. Really not suitable for sane systems, and was entirely removed as an option from the kernel many years ago. -fstack-protector: Adds the canary save/check to functions that define an 8 (--param=ssp-buffer-size=N, N=8 by default) or more byte local char array. Traditionally, stack overflows happened with string-based manipulations, so this was a way to find those functions. Very few total functions actually get the canary; no measurable performance or size overhead. -fstack-protector-strong Adds the canary for a wider set of functions, since it's not just those with strings that have ultimately been vulnerable to stack-busting. With this superset, more functions end up with a canary, but it still remains small compared to all functions with only a small change in performance. Based on the original design document, a function gets the canary when it contains any of: - local variable's address used as part of the right hand side of an assignment or function argument - local variable is an array (or union containing an array), regardless of array type or length - uses register local variables https://docs.google.com/a/google.com/document/d/1xXBH6rRZue4f296vGt9YQcuLVQHeE516stHwt8M9xyU Find below a comparison of "size" and "objdump" output when built with gcc-4.9 in three configurations: - defconfig 11430641 kernel text size 36110 function bodies - defconfig + CONFIG_CC_STACKPROTECTOR_REGULAR 11468490 kernel text size (+0.33%) 1015 of 36110 functions are stack-protected (2.81%) - defconfig + CONFIG_CC_STACKPROTECTOR_STRONG via this patch 11692790 kernel text size (+2.24%) 7401 of 36110 functions are stack-protected (20.5%) With -strong, ARM's compressed boot code now triggers stack protection, so a static guard was added. Since this is only used during decompression and was never used before, the exposure here is very small. Once it switches to the full kernel, the stack guard is back to normal. Chrome OS has been using -fstack-protector-strong for its kernel builds for the last 8 months with no problems. 
Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Michal Marek <mmarek@suse.cz> Cc: Russell King <linux@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Mundt <lethal@linux-sh.org> Cc: James Hogan <james.hogan@imgtec.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Shawn Guo <shawn.guo@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@linux-mips.org Cc: linux-arch@vger.kernel.org Link: http://lkml.kernel.org/r/1387481759-14535-3-git-send-email-keescook@chromium.org [ Improved the changelog and descriptions some more. ] Signed-off-by: Ingo Molnar <mingo@kernel.org> [ kamal: 3.13-stable: need these arch/arm/boot/compressed/misc.c __stack_chk canary functions, even for just the old CONFIG_CC_STACKPROTECTOR ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
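A small userspace illustration of the coverage difference (assuming gcc's default --param=ssp-buffer-size=8): with -fstack-protector only the second function gets a canary, while -fstack-protector-strong instruments both, because the first one lets a local variable's address escape.

```c
#include <string.h>

int consume(int *p);	/* assumed external helper, defined elsewhere */

/* No sizable char array here: only -fstack-protector-strong adds a canary,
 * triggered by the local's address being passed to another function. */
int escapes_local_address(int x)
{
	int local = x;

	return consume(&local);
}

/* An 8+ byte char array: both -fstack-protector and -strong add a canary. */
int has_char_buffer(const char *s)
{
	char buf[16];

	strncpy(buf, s, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';
	return buf[0];
}
```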
-
Kees Cook authored
commit 19952a92 upstream. Instead of duplicating the CC_STACKPROTECTOR Kconfig and Makefile logic in each architecture, switch to using HAVE_CC_STACKPROTECTOR and keep everything in one place. This retains the x86-specific bug verification scripts. Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Michal Marek <mmarek@suse.cz> Cc: Russell King <linux@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Mundt <lethal@linux-sh.org> Cc: James Hogan <james.hogan@imgtec.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Shawn Guo <shawn.guo@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@linux-mips.org Cc: linux-arch@vger.kernel.org Link: http://lkml.kernel.org/r/1387481759-14535-2-git-send-email-keescook@chromium.org Signed-off-by: Ingo Molnar <mingo@kernel.org> [ kamal: 3.13-stable prereq for 8779657d stackprotector: Introduce CONFIG_CC_STACKPROTECTOR_STRONG ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Kosuke Tatsukawa authored
commit e81107d4 upstream. My colleague ran into a program stall on an x86_64 server, where n_tty_read() was waiting for data even if there was data in the buffer in the pty. The kernel stack for the stuck process looks like below. #0 [ffff88303d107b58] __schedule at ffffffff815c4b20 #1 [ffff88303d107bd0] schedule at ffffffff815c513e #2 [ffff88303d107bf0] schedule_timeout at ffffffff815c7818 #3 [ffff88303d107ca0] wait_woken at ffffffff81096bd2 #4 [ffff88303d107ce0] n_tty_read at ffffffff8136fa23 #5 [ffff88303d107dd0] tty_read at ffffffff81368013 #6 [ffff88303d107e20] __vfs_read at ffffffff811a3704 #7 [ffff88303d107ec0] vfs_read at ffffffff811a3a57 #8 [ffff88303d107f00] sys_read at ffffffff811a4306 #9 [ffff88303d107f50] entry_SYSCALL_64_fastpath at ffffffff815c86d7 There seem to be two problems causing this issue. First, in drivers/tty/n_tty.c, __receive_buf() stores the data and updates ldata->commit_head using smp_store_release() and then checks the wait queue using waitqueue_active(). However, since there is no memory barrier, __receive_buf() could return without calling wake_up_interruptible_poll(), and at the same time, n_tty_read() could start to wait in wait_woken() as in the following chart. __receive_buf() n_tty_read() ------------------------------------------------------------------------ if (waitqueue_active(&tty->read_wait)) /* Memory operations issued after the RELEASE may be completed before the RELEASE operation has completed */ add_wait_queue(&tty->read_wait, &wait); ... if (!input_available_p(tty, 0)) { smp_store_release(&ldata->commit_head, ldata->read_head); ... timeout = wait_woken(&wait, TASK_INTERRUPTIBLE, timeout); ------------------------------------------------------------------------ The second problem is that n_tty_read() also lacks a memory barrier call and could also cause __receive_buf() to return without calling wake_up_interruptible_poll(), and n_tty_read() to wait in wait_woken() as in the chart below. __receive_buf() n_tty_read() ------------------------------------------------------------------------ spin_lock_irqsave(&q->lock, flags); /* from add_wait_queue() */ ... if (!input_available_p(tty, 0)) { /* Memory operations issued after the RELEASE may be completed before the RELEASE operation has completed */ smp_store_release(&ldata->commit_head, ldata->read_head); if (waitqueue_active(&tty->read_wait)) __add_wait_queue(q, wait); spin_unlock_irqrestore(&q->lock,flags); /* from add_wait_queue() */ ... timeout = wait_woken(&wait, TASK_INTERRUPTIBLE, timeout); ------------------------------------------------------------------------ There are also other places in drivers/tty/n_tty.c which have similar calls to waitqueue_active(), so instead of adding many memory barrier calls, this patch simply removes the call to waitqueue_active(), leaving just wake_up*() behind. This fixes both problems because, even though the memory access before or after the spinlocks in both wake_up*() and add_wait_queue() can sneak into the critical section, it cannot go past it and the critical section assures that they will be serialized (please see "INTER-CPU ACQUIRING BARRIER EFFECTS" in Documentation/memory-barriers.txt for a better explanation). Moreover, the resulting code is much simpler. Latency measurement using a ping-pong test over a pty doesn't show any visible performance drop.
Signed-off-by: Kosuke Tatsukawa <tatsu@ab.jp.nec.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [jsalisbury: Backported to 3.13.y: - Use wake_up_interruptible(), not wake_up_interruptible_poll() - There are only two spurious uses of waitqueue_active() to remove] BugLink: http://bugs.launchpad.net/bugs/1512815 Signed-off-by: Joseph Salisbury <joseph.salisbury@canonical.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
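A sketch of the pattern being removed versus the one kept (not the literal n_tty.c hunks); the unconditional wake-up is safe because the waitqueue spinlock taken inside wake_up*() and add_wait_queue() serializes both sides:

```c
	/* Before: racy -- the reader may be adding itself to the queue right
	 * now, and the unsynchronized waitqueue_active() check can miss it. */
	if (waitqueue_active(&tty->read_wait))
		wake_up_interruptible(&tty->read_wait);

	/* After: always wake; a cheap no-op when nobody waits, and the
	 * queue's internal spinlock provides the required ordering. */
	wake_up_interruptible(&tty->read_wait);
```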
-
- 07 Dec, 2015 4 commits
-
Kamal Mostafa authored
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Filipe Manana authored
commit 0305cd5f upstream. When truncating a file to a smaller size which consists of an inline extent that is compressed, we did not discard (or made unusable) the data between the new file size and the old file size, wasting metadata space and allowing for the truncated data to be leaked and the data corruption/loss mentioned below. We were also not correctly decrementing the number of bytes used by the inode, we were setting it to zero, giving a wrong report for callers of the stat(2) syscall. The fsck tool also reported an error about a mismatch between the nbytes of the file versus the real space used by the file. Now because we weren't discarding the truncated region of the file, it was possible for a caller of the clone ioctl to actually read the data that was truncated, allowing for a security breach without requiring root access to the system, using only standard filesystem operations. The scenario is the following: 1) User A creates a file which consists of an inline and compressed extent with a size of 2000 bytes - the file is not accessible to any other users (no read, write or execution permission for anyone else); 2) The user truncates the file to a size of 1000 bytes; 3) User A makes the file world readable; 4) User B creates a file consisting of an inline extent of 2000 bytes; 5) User B issues a clone operation from user A's file into its own file (using a length argument of 0, clone the whole range); 6) User B now gets to see the 1000 bytes that user A truncated from its file before it made its file world readbale. User B also lost the bytes in the range [1000, 2000[ bytes from its own file, but that might be ok if his/her intention was reading stale data from user A that was never supposed to be public. Note that this contrasts with the case where we truncate a file from 2000 bytes to 1000 bytes and then truncate it back from 1000 to 2000 bytes. In this case reading any byte from the range [1000, 2000[ will return a value of 0x00, instead of the original data. This problem exists since the clone ioctl was added and happens both with and without my recent data loss and file corruption fixes for the clone ioctl (patch "Btrfs: fix file corruption and data loss after cloning inline extents"). So fix this by truncating the compressed inline extents as we do for the non-compressed case, which involves decompressing, if the data isn't already in the page cache, compressing the truncated version of the extent, writing the compressed content into the inline extent and then truncate it. The following test case for fstests reproduces the problem. In order for the test to pass both this fix and my previous fix for the clone ioctl that forbids cloning a smaller inline extent into a larger one, which is titled "Btrfs: fix file corruption and data loss after cloning inline extents", are needed. Without that other fix the test fails in a different way that does not leak the truncated data, instead part of destination file gets replaced with zeroes (because the destination file has a larger inline extent than the source). seq=`basename $0` seqres=$RESULT_DIR/$seq echo "QA output created by $seq" tmp=/tmp/$$ status=1 # failure is the default! trap "_cleanup; exit \$status" 0 1 2 3 15 _cleanup() { rm -f $tmp.* } # get standard environment, filters and checks . ./common/rc . 
./common/filter # real QA test starts here _need_to_be_root _supported_fs btrfs _supported_os Linux _require_scratch _require_cloner rm -f $seqres.full _scratch_mkfs >>$seqres.full 2>&1 _scratch_mount "-o compress" # Create our test files. File foo is going to be the source of a clone operation # and consists of a single inline extent with an uncompressed size of 512 bytes, # while file bar consists of a single inline extent with an uncompressed size of # 256 bytes. For our test's purpose, it's important that file bar has an inline # extent with a size smaller than foo's inline extent. $XFS_IO_PROG -f -c "pwrite -S 0xa1 0 128" \ -c "pwrite -S 0x2a 128 384" \ $SCRATCH_MNT/foo | _filter_xfs_io $XFS_IO_PROG -f -c "pwrite -S 0xbb 0 256" $SCRATCH_MNT/bar | _filter_xfs_io # Now durably persist all metadata and data. We do this to make sure that we get # on disk an inline extent with a size of 512 bytes for file foo. sync # Now truncate our file foo to a smaller size. Because it consists of a # compressed and inline extent, btrfs did not shrink the inline extent to the # new size (if the extent was not compressed, btrfs would shrink it to 128 # bytes), it only updates the inode's i_size to 128 bytes. $XFS_IO_PROG -c "truncate 128" $SCRATCH_MNT/foo # Now clone foo's inline extent into bar. # This clone operation should fail with errno EOPNOTSUPP because the source # file consists only of an inline extent and the file's size is smaller than # the inline extent of the destination (128 bytes < 256 bytes). However the # clone ioctl was not prepared to deal with a file that has a size smaller # than the size of its inline extent (something that happens only for compressed # inline extents), resulting in copying the full inline extent from the source # file into the destination file. # # Note that btrfs' clone operation for inline extents consists of removing the # inline extent from the destination inode and copy the inline extent from the # source inode into the destination inode, meaning that if the destination # inode's inline extent is larger (N bytes) than the source inode's inline # extent (M bytes), some bytes (N - M bytes) will be lost from the destination # file. Btrfs could copy the source inline extent's data into the destination's # inline extent so that we would not lose any data, but that's currently not # done due to the complexity that would be needed to deal with such cases # (specially when one or both extents are compressed), returning EOPNOTSUPP, as # it's normally not a very common case to clone very small files (only case # where we get inline extents) and copying inline extents does not save any # space (unlike for normal, non-inlined extents). $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/foo $SCRATCH_MNT/bar # Now because the above clone operation used to succeed, and due to foo's inline # extent not being shinked by the truncate operation, our file bar got the whole # inline extent copied from foo, making us lose the last 128 bytes from bar # which got replaced by the bytes in range [128, 256[ from foo before foo was # truncated - in other words, data loss from bar and being able to read old and # stale data from foo that should not be possible to read anymore through normal # filesystem operations. Contrast with the case where we truncate a file from a # size N to a smaller size M, truncate it back to size N and then read the range # [M, N[, we should always get the value 0x00 for all the bytes in that range. 
# We expected the clone operation to fail with errno EOPNOTSUPP and therefore # not modify our file's bar data/metadata. So its content should be 256 bytes # long with all bytes having the value 0xbb. # # Without the btrfs bug fix, the clone operation succeeded and resulted in # leaking truncated data from foo, the bytes that belonged to its range # [128, 256[, and losing data from bar in that same range. So reading the # file gave us the following content: # # 0000000 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 # * # 0000200 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a # * # 0000400 echo "File bar's content after the clone operation:" od -t x1 $SCRATCH_MNT/bar # Also because the foo's inline extent was not shrunk by the truncate # operation, btrfs' fsck, which is run by the fstests framework everytime a # test completes, failed reporting the following error: # # root 5 inode 257 errors 400, nbytes wrong status=0 exit Signed-off-by: Filipe Manana <fdmanana@suse.com> [ kamal: backport 3.13-stable: root->ref_cows vs BTRFS_ROOT_REF_COWS; context ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Chris Mason authored
commit 514ac8ad upstream. If we truncate an uncompressed inline item, ram_bytes isn't updated to reflect the new size. The fixe uses the size directly from the item header when reading uncompressed inlines, and also fixes truncate to update the size as it goes. Reported-by: Jens Axboe <axboe@fb.com> Signed-off-by: Chris Mason <clm@fb.com> [ kamal: backport to 3.13-stable: applied to all _inline_len callers ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
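A sketch of the helper's logic (names follow btrfs but may differ in detail for 3.13): for uncompressed inline extents the length is derived from the item size rather than trusted from ram_bytes.

```c
static u64 inline_extent_len_sketch(struct extent_buffer *leaf, int slot,
				    struct btrfs_file_extent_item *fi)
{
	if (btrfs_file_extent_compression(leaf, fi) == BTRFS_COMPRESS_NONE)
		/* uncompressed: the data simply fills the rest of the item */
		return btrfs_item_size_nr(leaf, slot) -
		       BTRFS_FILE_EXTENT_INLINE_DATA_START;

	/* compressed: ram_bytes holds the uncompressed length */
	return btrfs_file_extent_ram_bytes(leaf, fi);
}
```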
-
Filipe Manana authored
commit 8039d87d upstream. Currently the clone ioctl allows to clone an inline extent from one file to another that already has other (non-inlined) extents. This is a problem because btrfs is not designed to deal with files having inline and regular extents, if a file has an inline extent then it must be the only extent in the file and must start at file offset 0. Having a file with an inline extent followed by regular extents results in EIO errors when doing reads or writes against the first 4K of the file. Also, the clone ioctl allows one to lose data if the source file consists of a single inline extent, with a size of N bytes, and the destination file consists of a single inline extent with a size of M bytes, where we have M > N. In this case the clone operation removes the inline extent from the destination file and then copies the inline extent from the source file into the destination file - we lose the M - N bytes from the destination file, a read operation will get the value 0x00 for any bytes in the the range [N, M] (the destination inode's i_size remained as M, that's why we can read past N bytes). So fix this by not allowing such destructive operations to happen and return errno EOPNOTSUPP to user space. Currently the fstest btrfs/035 tests the data loss case but it totally ignores this - i.e. expects the operation to succeed and does not check the we got data loss. The following test case for fstests exercises all these cases that result in file corruption and data loss: seq=`basename $0` seqres=$RESULT_DIR/$seq echo "QA output created by $seq" tmp=/tmp/$$ status=1 # failure is the default! trap "_cleanup; exit \$status" 0 1 2 3 15 _cleanup() { rm -f $tmp.* } # get standard environment, filters and checks . ./common/rc . ./common/filter # real QA test starts here _need_to_be_root _supported_fs btrfs _supported_os Linux _require_scratch _require_cloner _require_btrfs_fs_feature "no_holes" _require_btrfs_mkfs_feature "no-holes" rm -f $seqres.full test_cloning_inline_extents() { local mkfs_opts=$1 local mount_opts=$2 _scratch_mkfs $mkfs_opts >>$seqres.full 2>&1 _scratch_mount $mount_opts # File bar, the source for all the following clone operations, consists # of a single inline extent (50 bytes). $XFS_IO_PROG -f -c "pwrite -S 0xbb 0 50" $SCRATCH_MNT/bar \ | _filter_xfs_io # Test cloning into a file with an extent (non-inlined) where the # destination offset overlaps that extent. It should not be possible to # clone the inline extent from file bar into this file. $XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 16K" $SCRATCH_MNT/foo \ | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo # Doing IO against any range in the first 4K of the file should work. # Due to a past clone ioctl bug which allowed cloning the inline extent, # these operations resulted in EIO errors. echo "File foo data after clone operation:" # All bytes should have the value 0xaa (clone operation failed and did # not modify our file). od -t x1 $SCRATCH_MNT/foo $XFS_IO_PROG -c "pwrite -S 0xcc 0 100" $SCRATCH_MNT/foo | _filter_xfs_io # Test cloning the inline extent against a file which has a hole in its # first 4K followed by a non-inlined extent. It should not be possible # as well to clone the inline extent from file bar into this file. $XFS_IO_PROG -f -c "pwrite -S 0xdd 4K 12K" $SCRATCH_MNT/foo2 \ | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo2 # Doing IO against any range in the first 4K of the file should work. 
# Due to a past clone ioctl bug which allowed cloning the inline extent, # these operations resulted in EIO errors. echo "File foo2 data after clone operation:" # All bytes should have the value 0x00 (clone operation failed and did # not modify our file). od -t x1 $SCRATCH_MNT/foo2 $XFS_IO_PROG -c "pwrite -S 0xee 0 90" $SCRATCH_MNT/foo2 | _filter_xfs_io # Test cloning the inline extent against a file which has a size of zero # but has a prealloc extent. It should not be possible as well to clone # the inline extent from file bar into this file. $XFS_IO_PROG -f -c "falloc -k 0 1M" $SCRATCH_MNT/foo3 | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo3 # Doing IO against any range in the first 4K of the file should work. # Due to a past clone ioctl bug which allowed cloning the inline extent, # these operations resulted in EIO errors. echo "First 50 bytes of foo3 after clone operation:" # Should not be able to read any bytes, file has 0 bytes i_size (the # clone operation failed and did not modify our file). od -t x1 $SCRATCH_MNT/foo3 $XFS_IO_PROG -c "pwrite -S 0xff 0 90" $SCRATCH_MNT/foo3 | _filter_xfs_io # Test cloning the inline extent against a file which consists of a # single inline extent that has a size not greater than the size of # bar's inline extent (40 < 50). # It should be possible to do the extent cloning from bar to this file. $XFS_IO_PROG -f -c "pwrite -S 0x01 0 40" $SCRATCH_MNT/foo4 \ | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo4 # Doing IO against any range in the first 4K of the file should work. echo "File foo4 data after clone operation:" # Must match file bar's content. od -t x1 $SCRATCH_MNT/foo4 $XFS_IO_PROG -c "pwrite -S 0x02 0 90" $SCRATCH_MNT/foo4 | _filter_xfs_io # Test cloning the inline extent against a file which consists of a # single inline extent that has a size greater than the size of bar's # inline extent (60 > 50). # It should not be possible to clone the inline extent from file bar # into this file. $XFS_IO_PROG -f -c "pwrite -S 0x03 0 60" $SCRATCH_MNT/foo5 \ | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo5 # Reading the file should not fail. echo "File foo5 data after clone operation:" # Must have a size of 60 bytes, with all bytes having a value of 0x03 # (the clone operation failed and did not modify our file). od -t x1 $SCRATCH_MNT/foo5 # Test cloning the inline extent against a file which has no extents but # has a size greater than bar's inline extent (16K > 50). # It should not be possible to clone the inline extent from file bar # into this file. $XFS_IO_PROG -f -c "truncate 16K" $SCRATCH_MNT/foo6 | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo6 # Reading the file should not fail. echo "File foo6 data after clone operation:" # Must have a size of 16K, with all bytes having a value of 0x00 (the # clone operation failed and did not modify our file). od -t x1 $SCRATCH_MNT/foo6 # Test cloning the inline extent against a file which has no extents but # has a size not greater than bar's inline extent (30 < 50). # It should be possible to clone the inline extent from file bar into # this file. $XFS_IO_PROG -f -c "truncate 30" $SCRATCH_MNT/foo7 | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo7 # Reading the file should not fail. echo "File foo7 data after clone operation:" # Must have a size of 50 bytes, with all bytes having a value of 0xbb. 
od -t x1 $SCRATCH_MNT/foo7 # Test cloning the inline extent against a file which has a size not # greater than the size of bar's inline extent (20 < 50) but has # a prealloc extent that goes beyond the file's size. It should not be # possible to clone the inline extent from bar into this file. $XFS_IO_PROG -f -c "falloc -k 0 1M" \ -c "pwrite -S 0x88 0 20" \ $SCRATCH_MNT/foo8 | _filter_xfs_io $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo8 echo "File foo8 data after clone operation:" # Must have a size of 20 bytes, with all bytes having a value of 0x88 # (the clone operation did not modify our file). od -t x1 $SCRATCH_MNT/foo8 _scratch_unmount } echo -e "\nTesting without compression and without the no-holes feature...\n" test_cloning_inline_extents echo -e "\nTesting with compression and without the no-holes feature...\n" test_cloning_inline_extents "" "-o compress" echo -e "\nTesting without compression and with the no-holes feature...\n" test_cloning_inline_extents "-O no-holes" "" echo -e "\nTesting with compression and with the no-holes feature...\n" test_cloning_inline_extents "-O no-holes" "-o compress" status=0 exit Signed-off-by: Filipe Manana <fdmanana@suse.com> [ kamal: backport to 3.13-stable: context ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
- 04 Dec, 2015 2 commits
-
Junichi Nomura authored
commit 2a708cff upstream. __dm_destroy() takes io_barrier SRCU lock (dm_get_live_table) and suspend_lock in reverse order. Doing so can cause AB-BA deadlock: __dm_destroy dm_swap_table --------------------------------------------------- mutex_lock(suspend_lock) dm_get_live_table() srcu_read_lock(io_barrier) dm_sync_table() synchronize_srcu(io_barrier) .. waiting for dm_put_live_table() mutex_lock(suspend_lock) .. waiting for suspend_lock Fix this by taking the locks in proper order. Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com> Fixes: ab7c7bb6 ("dm: hold suspend_lock while suspending device during device deletion") Acked-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
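A sketch of the corrected ordering in __dm_destroy() (teardown details elided): take suspend_lock first, exactly as dm_swap_table() does, and drop the SRCU read side before anything that may call synchronize_srcu():

```c
	int srcu_idx;
	struct dm_table *map;

	mutex_lock(&md->suspend_lock);			/* 1st: same order as dm_swap_table() */
	map = dm_get_live_table(md, &srcu_idx);		/* 2nd: srcu_read_lock(io_barrier) */

	if (!dm_suspended_md(md)) {
		dm_table_presuspend_targets(map);
		dm_table_postsuspend_targets(map);
	}

	/* release the SRCU read side before dm_sync_table()/synchronize_srcu() */
	dm_put_live_table(md, srcu_idx);
	mutex_unlock(&md->suspend_lock);
```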
-
Kamal Mostafa authored
This reverts commit a1e5fc68 (which is a botched cherry-pick), to be replaced with a fixed backport. Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
- 02 Dec, 2015 15 commits
-
Christophe JAILLET authored
commit eb8ed1eb upstream. Reference to the 'np' node is dropped before dereferencing the 'sizep' and 'basep' pointers, which could by then point to junk if the node has been freed. Refactor code to call 'of_node_put' later. Fixes: c5df3926 ("drivers/char/tpm: Add securityfs support for event log") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Acked-by: Peter Huewe <PeterHuewe@gmx.de> [ luis: backported to 3.16: adjusted context ] Signed-off-by: Luis Henriques <luis.henriques@canonical.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
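A sketch of the reordering (property names follow the driver, error handling trimmed): the node reference is dropped only after everything read through it has been consumed.

```c
	sizep = of_get_property(np, "linux,sml-size", NULL);
	basep = of_get_property(np, "linux,sml-base", NULL);
	if (!sizep || !basep) {
		of_node_put(np);
		return -EIO;
	}

	log->bios_event_log = kmalloc(*sizep, GFP_KERNEL);
	/* ... copy *sizep bytes of event log from *basep ... */

	of_node_put(np);	/* moved: drop the reference after the last use */
```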
-
Florian Westphal authored
commit dbc3617f upstream. nfnetlink_bind request_module()s all the time as nfnetlink_get_subsys() shifts the argument by 8 to obtain the subsys id. So using type instead of type << 8 always returns NULL. Fixes: 03292745 ("netlink: add nlk->netlink_bind hook for module auto-loading") Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
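The corrected lookup, roughly (only the shift matters; the rest of the bind hook is elided):

```c
	/* nfnetlink_get_subsys() does "type >> 8" internally, so the subsystem
	 * id must be passed in the high byte -- was: nfnetlink_get_subsys(type) */
	ss = nfnetlink_get_subsys(type << 8);
	if (!ss)
		request_module("nfnetlink-subsys-%d", type);
```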
-
Lukas Wunner authored
commit 3c67d839 upstream. In its original version, drm_framebuffer_init() returned a negative int if drm_mode_object_get() failed (f453ba04, "DRM: add mode setting support"). This was accidentally disabled by commit 4b096ac1 ("drm: revamp locking around fb creation/destruction"). Thus, drm_framebuffer_init() pretends success if drm_mode_object_get() failed. Reinstate the original behaviour. Also fix erroneous kernel-doc of drm_mode_object_get(). Fixes: 4b096ac1 ("drm: revamp locking around fb creation/ destruction") Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Lukas Wunner <lukas@wunner.de> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
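The essence of the reinstated behaviour (a fragment of drm_framebuffer_init(), not the full function): propagate the error from drm_mode_object_get() instead of unconditionally returning 0.

```c
	ret = drm_mode_object_get(dev, &fb->base, DRM_MODE_OBJECT_FB);
	if (ret)
		goto out;

	dev->mode_config.num_fb++;
	list_add(&fb->fb_list, &dev->mode_config.fb_list);
out:
	mutex_unlock(&dev->mode_config.fb_lock);
	return ret;	/* previously "return 0", which masked the failure */
```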
-
Arnd Bergmann authored
commit 54c09889 upstream. The z2 machine calls pxa27x_set_pwrmode() in order to power off the machine, but this function gets discarded early at boot because it is marked __init, as pointed out by kbuild: WARNING: vmlinux.o(.text+0x145c4): Section mismatch in reference from the function z2_power_off() to the function .init.text:pxa27x_set_pwrmode() The function z2_power_off() references the function __init pxa27x_set_pwrmode(). This is often because z2_power_off lacks a __init annotation or the annotation of pxa27x_set_pwrmode is wrong. This removes the __init section modifier to fix rebooting and the build error. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: ba4a90a6 ("ARM: pxa/z2: fix building error of pxa27x_cpu_suspend() no longer available") Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Eric Dumazet authored
commit 161642e2 upstream. Recent TCP listener patches exposed a prior af_packet bug : match_fanout_group() blindly assumes it is always safe to cast sk to a packet socket to compare fanout with af_packet_priv But SYNACK packets can be sent while attached to request_sock, which are smaller than a "struct sock". We can read non existent memory and crash. Fixes: c0de08d0 ("af_packet: don't emit packet on orig fanout group") Fixes: ca6fb065 ("tcp: attach SYNACK messages to request sockets instead of listener") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Eric Leblond <eric@regit.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
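The fix boils down to refusing to cast anything that is not a real packet socket; roughly:

```c
static bool match_fanout_group(struct packet_type *ptype, struct sock *sk)
{
	/* SYNACKs may be sent on behalf of a request_sock, which is smaller
	 * than struct sock; only a genuine PF_PACKET socket may be cast. */
	if (sk->sk_family != PF_PACKET)
		return false;

	return ptype->af_packet_priv == pkt_sk(sk)->fanout;
}
```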
-
Johannes Berg authored
commit 8ec6d978 upstream. The ifmgd->ave_beacon_signal value cannot be taken as is for comparisons, it must be divided by 16 since it's represented like that for better accuracy of the EWMA calculations. This would lead to invalid driver RSSI events. Fix the used value. Fixes: 615f7b9b ("mac80211: add driver RSSI threshold events") Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
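The scaling fix in a nutshell (a fragment; field names follow mac80211's ieee80211_if_managed, surrounding context elided):

```c
	/* ave_beacon_signal is kept multiplied by 16 for EWMA accuracy;
	 * undo the scaling before comparing against the dBm thresholds */
	int sig = ifmgd->ave_beacon_signal / 16;
	int last_sig = ifmgd->last_ave_beacon_signal;

	if (sig > ifmgd->rssi_max_thold &&
	    (last_sig <= ifmgd->rssi_min_thold || last_sig == 0))
		drv_rssi_callback(local, sdata, RSSI_EVENT_HIGH);
```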
-
Jay Vosburgh authored
commit 40baec22 upstream. Since commit 7d5cd2ce529b, when bond_enslave fails on devices that are not ARPHRD_ETHER, if needed, it resets the bonding device back to ARPHRD_ETHER by calling ether_setup. Unfortunately, ether_setup clobbers dev->flags, clearing IFF_UP if the bond device is up, leaving it in a quasi-down state without having actually gone through dev_close. For bonding, if any periodic work queue items are active (miimon, arp_interval, etc), those will remain running, as they are stopped by bond_close. At this point, if the bonding module is unloaded or the bond is deleted, the system will panic when the work function is called. This panic is resolved by calling dev_close on the bond itself prior to calling ether_setup. Cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: Jay Vosburgh <jay.vosburgh@canonical.com> Fixes: 7d5cd2ce ("bonding: correctly handle bonding type change on enslave failure") Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
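A sketch of the error path's new shape (bond_enslave() context elided): close the bond before ether_setup() wipes dev->flags, so bond_close() gets to stop the periodic work.

```c
	/* error path of bond_enslave() for a non-ARPHRD_ETHER slave */
	if (bond_dev->flags & IFF_UP)
		dev_close(bond_dev);	/* stops miimon/ARP work via bond_close() */
	ether_setup(bond_dev);		/* would otherwise clear IFF_UP silently */
```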
-
Peter Feiner authored
commit 956959f6 upstream. The -i flag was incorrectly listed as a short flag for --no-inherit. It should have only been listed as a short flag for --input. This documentation error has existed since the --input flag was introduced in 6810fc91 (perf trace: Add option to analyze events in a file versus live). Signed-off-by: Peter Feiner <pfeiner@google.com> Cc: David Ahern <dsahern@gmail.com> Link: http://lkml.kernel.org/r/1446657706-14518-1-git-send-email-pfeiner@google.com Fixes: 6810fc91 ("perf trace: Add option to analyze events in a file versus live") Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Michal Kubeček authored
commit ebac62fe upstream. Both tunnel6_protocol and tunnel46_protocol share the same error handler, tunnel6_err(), which traverses through tunnel6_handlers list. For ipip6 tunnels, we need to traverse tunnel46_handlers as we do e.g. in tunnel46_rcv(). Current code can generate an ICMPv6 error message with an IPv4 packet embedded in it. Fixes: 73d605d1 ("[IPSEC]: changing API of xfrm6_tunnel_register") Signed-off-by: Michal Kubecek <mkubecek@suse.cz> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
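A sketch of the separate error handler for the IPv4-in-IPv6 case, mirroring tunnel46_rcv() (the prototype follows struct inet6_protocol's err_handler):

```c
static void tunnel46_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
			 u8 type, u8 code, int offset, __be32 info)
{
	struct xfrm6_tunnel *handler;

	/* walk the ipip6 handler list, not tunnel6_handlers */
	for_each_tunnel_rcu(tunnel46_handlers, handler)
		if (!handler->err_handler(skb, opt, type, code, offset, info))
			break;
}
```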
-
Ralf Baechle authored
commit f25319d2 upstream. Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Fixes: f24219b4 (cherry picked from commit f0a232cde7be18a207fd057dd79bbac8a0a45dec) Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Dan Carpenter authored
commit 1f35d04a upstream. The iomap[] array has PCIM_IOMAP_MAX (6) elements and not DEVICE_COUNT_RESOURCE (16). This bug was found using a static checker. It may be that the "if (!(mask & (1 << i)))" check means we never actually go past the end of the array in real life. Fixes: ec04b075 ('iomap: implement pcim_iounmap_regions()') Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
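The corrected loop bound, roughly (the rest of pcim_iounmap_regions() is unchanged):

```c
	for (i = 0; i < PCIM_IOMAP_MAX; i++) {	/* was DEVICE_COUNT_RESOURCE (16) */
		if (!(mask & (1 << i)))
			continue;

		pcim_iounmap(pdev, iomap[i]);	/* iomap[] only has PCIM_IOMAP_MAX slots */
		pci_release_region(pdev, i);
	}
```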
-
Andy Shevchenko authored
commit 39416677 upstream. We replace __fls() by __ffs() since we have to find a *minimum* data width that satisfies both source and destination. While here, rename dwc_fast_fls() to dwc_fast_ffs() which it really is. Fixes: 4c2d56c5 (dw_dmac: introduce dwc_fast_fls()) Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Dan Carpenter authored
commit 1f9c6e1b upstream. There were several bugs here. 1) The done label was in the wrong place so we didn't copy any information out when there was no command given. 2) We were using PAGE_SIZE as the size of the buffer instead of "PAGE_SIZE - pos". 3) snprintf() returns the number of characters that would have been printed if there were enough space. If there was not enough space (and we had fixed the memory corruption bug #2) then it would result in an information leak when we do simple_read_from_buffer(). I've changed it to use scnprintf() instead. I also removed the initialization at the start of the function, because I thought it made the code a little more clear. Fixes: 5e6e3a92 ('wireless: mwifiex: initial commit for Marvell mwifiex driver') Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Amitkumar Karwar <akarwar@marvell.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
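The safe pattern for appending into a page-sized debugfs buffer, as the fix applies it (the field names here are illustrative):

```c
	/* scnprintf() returns what was actually written, never more than the
	 * remaining space, so pos can be advanced and reused safely */
	pos += scnprintf(buf + pos, PAGE_SIZE - pos,
			 "last_cmd_index = %d\n", stats->last_cmd_index);
	pos += scnprintf(buf + pos, PAGE_SIZE - pos,
			 "last_cmd_resp_index = %d\n", stats->last_cmd_resp_index);

	return simple_read_from_buffer(ubuf, count, ppos, buf, pos);
```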
-
Valentin Rothberg authored
commit 90adf98d upstream. Since commit 1c6c6952 ("genirq: Reject bogus threaded irq requests") threaded IRQs without a primary handler need to be requested with IRQF_ONESHOT, otherwise the request will fail. scripts/coccinelle/misc/irqf_oneshot.cocci detected this issue. Fixes: b5874f33 ("wm831x_power: Use genirq") Signed-off-by: Valentin Rothberg <valentinrothberg@gmail.com> Signed-off-by: Sebastian Reichel <sre@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
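What such a request looks like with the flag added (the handler name and trigger flags are illustrative, not necessarily the driver's exact ones):

```c
	/* a threaded IRQ with no primary handler must be IRQF_ONESHOT,
	 * otherwise genirq rejects the request since commit 1c6c6952 */
	ret = request_threaded_irq(irq, NULL, wm831x_power_irq_thread,
				   IRQF_TRIGGER_RISING | IRQF_ONESHOT,
				   "SYSLO", power);
	if (ret)
		dev_err(&pdev->dev, "Failed to request SYSLO IRQ %d: %d\n",
			irq, ret);
```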
-
Maciej W. Rozycki authored
commit b582ef5c upstream. Do not clobber the buffer space passed from `search_binary_handler' and originally preloaded by `prepare_binprm' with the executable's file header by overwriting it with its interpreter's file header. Instead keep the buffer space intact and directly use the data structure locally allocated for the interpreter's file header, fixing a bug introduced in 2.1.14 with loadable module support (linux-mips.org commit beb11695 [Import of Linux/MIPS 2.1.14], predating kernel.org repo's history). Adjust the amount of data read from the interpreter's file accordingly. This was not an issue before loadable module support, because back then `load_elf_binary' was executed only once for a given ELF executable, whether the function succeeded or failed. With loadable module support supported and enabled, upon a failure of `load_elf_binary' -- which may for example be caused by architecture code rejecting an executable due to a missing hardware feature requested in the file header -- a module load is attempted and then the function reexecuted by `search_binary_handler'. With the executable's file header replaced with its interpreter's file header the executable can then be erroneously accepted in this subsequent attempt. Signed-off-by: Maciej W. Rozycki <macro@imgtec.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
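A sketch of the change's core (3.x kernel_read() signature): read the interpreter's header into its own structure, and only as many bytes as that structure holds, leaving bprm->buf with the main executable's header intact.

```c
	/* was: kernel_read(interpreter, 0, bprm->buf, BINPRM_BUF_SIZE) followed
	 * by copying bprm->buf into loc->interp_elf_ex, which clobbered the
	 * executable's header for any later load_elf_binary() retry */
	retval = kernel_read(interpreter, 0, (void *)&loc->interp_elf_ex,
			     sizeof(loc->interp_elf_ex));
	if (retval != sizeof(loc->interp_elf_ex)) {
		if (retval >= 0)
			retval = -EIO;
		goto out_free_dentry;
	}
```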
-