- 13 Apr, 2012 11 commits
-
-
Jan Beulich authored
commit 9aaf440f upstream.

This was lacking a comma between two strings that were supposed to be separate.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
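For readers unfamiliar with the failure mode, here is a minimal user-space illustration (not the patched kernel file): C concatenates adjacent string literals, so a missing comma silently fuses two array entries into one.

    #include <stdio.h>

    /* adjacent string literals are concatenated, so a missing comma
     * silently merges two array entries and shortens the array */
    static const char *buggy[] = { "alpha", "beta" "gamma", "delta" };  /* 3 entries */
    static const char *fixed[] = { "alpha", "beta", "gamma", "delta" }; /* 4 entries */

    int main(void)
    {
        printf("buggy: %zu entries, buggy[1] = \"%s\"\n",
               sizeof(buggy) / sizeof(buggy[0]), buggy[1]);   /* "betagamma" */
        printf("fixed: %zu entries, fixed[1] = \"%s\"\n",
               sizeof(fixed) / sizeof(fixed[0]), fixed[1]);   /* "beta" */
        return 0;
    }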
-
Julian Anastasov authored
commit 3e80acd1 upstream.

commit 64b3db22 (2.6.39), "Remove use of unreliable FADT revision field", causes a regression for old P4 systems because cst_control and other fields are no longer reset to 0. The effect is that acpi_processor_power_init notices cst_control != 0 and performs a write to the CST_CNT register that should not happen. As a result, the system oopses after the "No _CST, giving up" message, sometimes in acpi_ns_internalize_name, sometimes in acpi_ns_get_type, usually at random places, possibly during migration to CPU 1 in acpi_processor_get_throttling.

Any one of these settings helps to avoid this problem:

- acpi=off
- processor.nocst=1
- maxcpus=1

The fix is to update acpi_gbl_FADT.header.length after the original value is used to check for old revisions.

https://bugzilla.kernel.org/show_bug.cgi?id=42700
https://bugzilla.redhat.com/show_bug.cgi?id=727865

Signed-off-by: Julian Anastasov <ja@ssi.bg>
Acked-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: Jonathan Nieder <jrnieder@gmail.com>
Cc: Josh Boyer <jwboyer@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
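A minimal sketch of the ordering described above, assuming the surrounding ACPICA table-conversion code; the revision test and the cst_control reset are illustrative, only the placement of the length update is the point:

    /* use the *original* header fields for the old-revision check ... */
    if (acpi_gbl_FADT.header.revision < FADT2_REVISION_ID) {
        /* pre-2.6.39 behaviour: treat the table as old and zero the
         * extension fields, cst_control among them */
        acpi_gbl_FADT.cst_control = 0;
    }

    /* ... and only afterwards stretch the header to the in-kernel size */
    acpi_gbl_FADT.header.length = sizeof(struct acpi_table_fadt);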
-
Yinghai Lu authored
commit 89e96ada upstream.

While testing PCI root bus removal, I found that some root bus bridges were not freed. When booting with pnpacpi=off, those host bridges could be freed without problem. It turns out that some device references are not released during acpi_pnp_match: the match function should not keep holding a device reference after every call. Add a put_device call before returning.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
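A hedged sketch of the refcount rule being fixed: bus_find_device() (which drives match callbacks such as acpi_pnp_match) returns its result with a reference held, and that reference must be dropped once the caller is done. The use_acpi_node() helper is hypothetical.

    struct device *dev;

    dev = bus_find_device(&acpi_bus_type, NULL, pnp, acpi_pnp_match);
    if (dev) {
        use_acpi_node(pnp, to_acpi_device(dev)); /* hypothetical consumer */
        put_device(dev); /* balance the reference taken by the lookup */
    }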
-
Andi Kleen authored
commit 2815ab92 upstream.

On Intel CPUs the processor typically uses the highest frequency set by any logical CPU. When the system overheats, Linux first forces the frequency to the lowest available one to lower the temperature. However, this was done only per logical CPU, which means all logical CPUs in a package would need to go through this before the frequency is actually lowered.

Worse, this delay actually prevents real throttling, because the real throttle code only proceeds once the lowest frequency is already reached. So when a throttle event happens, force the lowest frequency for all CPUs in the package where it happened. The per-CPU state is now kept per package, not per logical CPU.

An alternative would be to do it per cpufreq unit, but since we want to bring down the temperature of the complete chip it's better to do it for all. In principle it may even make sense to do it for all CPUs, but I kept it at the package level for now.

With this change the frequency is actually lowered, which in turn also allows real throttling to proceed. I also removed an unnecessary per-CPU variable initialization.

v2: Fix package mapping

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Brian Norris authored
commit b54f47c8 upstream.

Using UBI on m25p80 can give messages like:

    UBI error: io_init: bad write buffer size 0 for 1 min. I/O unit

We need to initialize writebufsize; I think "page_size" is the correct "bufsize", although I'm not sure. Comments?

Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Artem Bityutskiy authored
commit fcc44a07 upstream.

The writebufsize concept was introduced by commit "0e4ca7e5 mtd: add writebufsize field to mtd_info struct" and it represents the maximum amount of data the device writes to the media at a time. This is an important parameter for UBIFS, which uses it during recovery; it basically defines how big a corruption caused by a power cut can be. Set writebufsize to 4 because this driver writes at most 4 bytes at a time.

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
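This and the following two patches all amount to a single assignment in the affected driver's setup path; a hedged sketch (the value is driver-specific: 4 here, PAGE_SIZE for block2mtd, the flash page size for m25p80):

    /* the largest unit a single interrupted program operation can corrupt;
     * UBIFS uses this during power-cut recovery */
    mtd->writebufsize = 4;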
-
Artem Bityutskiy authored
commit b6043874 upstream.

The writebufsize concept was introduced by commit "0e4ca7e5 mtd: add writebufsize field to mtd_info struct" and it represents the maximum amount of data the device writes to the media at a time. This is an important parameter for UBIFS, which uses it during recovery; it basically defines how big a corruption caused by a power cut can be. However, we forgot to set this parameter for block2mtd. Set it to PAGE_SIZE because this is actually the amount of data we write at a time.

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Acked-by: Joern Engel <joern@lazybastard.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Artem Bityutskiy authored
commit c4cc625e upstream.

The writebufsize concept was introduced by commit "0e4ca7e5 mtd: add writebufsize field to mtd_info struct" and it represents the maximum amount of data the device writes to the media at a time. This is an important parameter for UBIFS, which uses it during recovery; it basically defines how big a corruption caused by a power cut can be. Set writebufsize to the flash page size because that is the maximum amount of data the device writes at a time.

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rabin Vincent authored
[ Upstream commit 78fb72f7 ]

Make CDC EEM recalculate the hard_mtu after adjusting the hard_header_len. Without this, usbnet adjusts the MTU down to 1494 bytes, and the host is unable to receive standard 1500-byte frames from the device. Tested with the Linux USB Ethernet gadget.

Cc: Oliver Neukum <oliver@neukum.name>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
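A hedged sketch of the bind-time fix (usbnet driver structure assumed; EEM_HEAD is the 2-byte EEM framing header the driver already accounts for):

    /* grow the header room for the EEM framing ... */
    dev->net->hard_header_len += EEM_HEAD + ETH_FCS_LEN;
    /* ... then recompute hard_mtu so a 1500-byte frame still fits */
    dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len;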
-
danborkmann@iogearbox.net authored
[ Upstream commit 81213b5e ]

If both addresses are equal, nothing needs to be done. If the device is down, then we simply copy the new address to dev->dev_addr. If the device is up, then we add another loopback device with the new address, and if that does not fail, we remove the loopback device with the old address. Only then do we update dev->dev_addr.

Signed-off-by: Daniel Borkmann <daniel.borkmann@tik.ee.ethz.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
zhuangfeiran@ict.ac.cn authored
[ Upstream commit 1d24fb36 ]

When K >= 0xFFFF0000, AND needs the two least significant bytes of K as its operand, but EMIT2() gives it the least significant byte of K and 0x2. EMIT() should be used here instead of EMIT2().

Signed-off-by: Feiran Zhuang <zhuangfeiran@ict.ac.cn>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
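To see why, here is a hedged sketch of the JIT's emit macros as the commit describes them (emit_code() appends the given bytes to the generated program):

    #define EMIT(bytes, len)  do { prog = emit_code(prog, bytes, len); } while (0)
    #define EMIT2(b1, b2)     EMIT((b1) + ((b2) << 8), 2)

    /* emitting the 16-bit immediate for AND when K >= 0xFFFF0000: */
    EMIT2(K, 2); /* buggy: emits (K & 0xff) followed by the literal byte 0x02 */
    EMIT(K, 2);  /* fixed: emits the two least significant bytes of K */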
-
- 02 Apr, 2012 29 commits
-
-
Greg Kroah-Hartman authored
-
Matthew Garrett authored
commit c9651e70 upstream.

Since 3.2.12 and 3.3, some systems fail to boot with a BUG_ON. Other systems using the pata_jmicron driver fail to boot because no disks are detected. Passing pcie_aspm=force on the kernel command line works around it.

The cause: commit 4949be16 ("PCI: ignore pre-1.1 ASPM quirking when ASPM is disabled") changed the behaviour of pcie_aspm_sanity_check() to always return 0 if ASPM is disabled, in order to avoid cases where we changed ASPM state on pre-PCIe 1.1 devices. This skipped the secondary function of pcie_aspm_sanity_check, which was to avoid enabling ASPM on devices that have non-PCIe children, causing trouble later on. Move the aspm_disabled check so we continue to honour that scenario.

Addresses https://bugzilla.kernel.org/show_bug.cgi?id=42979 and http://bugs.debian.org/665420

Reported-by: Romain Francoise <romain@orebokech.com> # kernel panic
Reported-by: Chris Holland <bandidoirlandes@gmail.com> # disk detection trouble
Signed-off-by: Matthew Garrett <mjg@redhat.com>
Tested-by: Hatem Masmoudi <hatem.masmoudi@gmail.com> # Dell Latitude E5520
Tested-by: janek <jan0x6c@gmail.com> # pata_jmicron with JMB362/JMB363
[jn: with more symptoms in log message]
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yoshii Takashi authored
commit 49d4bcad upstream.

When DMA is enabled, an sh-sci transfer begins with

    uart_start()
      sci_start_tx()
        if (cookie_tx < 0)
          schedule_work()

Then DMA starts when the workqueue item runs, -- (A)

    process_one_work()
      work_fn_tx()
        cookie_tx = desc->submit_tx()

and finishes when the DMA transfer ends, -- (B)

    sci_dma_tx_complete()
      async_tx_ack()
      cookie_tx = -EINVAL
      (possibly another schedule_work())

This A-to-B sequence is not reentrant, since the controlling variables (for example, cookie_tx above) are not queues or lists. So they must be invoked as A B A B ...; otherwise the kernel crashes.

To ensure the sequence, sci_start_tx() tests whether cookie_tx < 0 (meaning "not used") before calling schedule_work(). But cookie_tx is not set (to a cookie, meaning "used") until the middle of the workqueue function work_fn_tx(). This gap between the test and the set allows the sequence to be broken under very frequent calls to uart_start(). Another gap, between async_tx_ack() and another schedule_work(), causes the same issue.

This patch introduces a new condition, "cookie_tx == 0", just to mark it as "busy", and assigns it within a spin-locked region to close the gaps.

Signed-off-by: Takashi Yoshii <takashi.yoshii.zj@renesas.com>
Reviewed-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
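A hedged sketch of the closed gap in sci_start_tx(), which runs under the port spinlock taken by uart_start(), so the test and the "busy" mark are atomic with respect to other senders:

    if (s->cookie_tx < 0) {
        s->cookie_tx = 0; /* 0 = work scheduled, cookie not yet assigned */
        schedule_work(&s->work_tx);
    }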
-
Dan Carpenter authored
commit 6d8d1749 upstream.

There is no point in passing a zero-length string here, and quite a few of the cache_parse() implementations will oops if count is zero.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
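The guard itself is one early check in the sunrpc cache write path; a hedged sketch (surrounding function shape approximated):

    if (count == 0)
        return -EINVAL; /* nothing to parse; several cache_parse()
                         * implementations oops on a zero count */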
-
Chris Metcalf authored
commit 1631fcea upstream.

<asm-generic/unistd.h> was set up to use sys_sendfile() for the 32-bit compat API instead of sys_sendfile64(), but in fact the right thing to do is to use sys_sendfile64() in all cases. The 32-bit sendfile64() API in glibc uses the sendfile64 syscall, so it has to be capable of doing full 64-bit operations. But the sys_sendfile() kernel implementation has a MAX_NON_LFS test in it which explicitly limits the offset to 2^32. So we need to use the sys_sendfile64() implementation in the kernel for this case.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dan Carpenter authored
commit 8f0750f1 upstream.

These are used as offsets into an array of GDT_ENTRY_TLS_ENTRIES members, so GDT_ENTRY_TLS_ENTRIES is one past the end of the array.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Link: http://lkml.kernel.org/r/20120324075250.GA28258@elgon.mountain
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
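A small user-space illustration of the off-by-one (the constant is illustrative, not the real GDT value): when an index is an offset into an array of GDT_ENTRY_TLS_ENTRIES members, the count itself is already out of range.

    #include <stdio.h>

    #define GDT_ENTRY_TLS_ENTRIES 3

    static int tls_array[GDT_ENTRY_TLS_ENTRIES];

    static int idx_ok_buggy(int idx) { return idx <= GDT_ENTRY_TLS_ENTRIES; } /* off by one */
    static int idx_ok_fixed(int idx) { return idx <  GDT_ENTRY_TLS_ENTRIES; }

    int main(void)
    {
        int idx = GDT_ENTRY_TLS_ENTRIES; /* first invalid offset */

        printf("buggy accepts %d: %d (would read past tls_array[%d])\n",
               idx, idx_ok_buggy(idx), GDT_ENTRY_TLS_ENTRIES - 1);
        printf("fixed accepts %d: %d\n", idx, idx_ok_fixed(idx));
        (void)tls_array;
        return 0;
    }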
-
Alok Kataria authored
commit 57779dc2 upstream.

While running the latest Linux as a guest under VMware in highly over-committed situations, we have seen cases where the refined TSC algorithm fails to get a valid tsc_start value in tsc_refine_calibration_work after multiple attempts. As a result, the kernel keeps rescheduling the tsc_irqwork task for later. Subsequently, after several attempts, when it gets a valid start value it goes through the refined calibration and either bails out or uses the new results.

Given that the kernel originally read the TSC frequency from the platform, which is the best it can get, I don't think there is much value in refining it. So for systems which get the TSC frequency from the platform we should skip the refined TSC algorithm. We can use the TSC_RELIABLE cpu cap flag to detect this; right now it is set only on VMware and for Moorestown Penwell, both of which have their own TSC calibration methods.

Signed-off-by: Alok N Kataria <akataria@vmware.com>
Cc: John Stultz <johnstul@us.ibm.com>
Cc: Dirk Brandewie <dirk.brandewie@gmail.com>
Cc: Alan Cox <alan@linux.intel.com>
[jstultz: Reworked to simply not schedule the refining work, rather than scheduling the work and bombing out later]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
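A hedged sketch of the resulting check (the wrapper function is hypothetical; X86_FEATURE_TSC_RELIABLE and tsc_irqwork are real arch/x86 names):

    static void tsc_schedule_refine(void) /* hypothetical wrapper */
    {
        /* the platform gave us the frequency: nothing to refine */
        if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
            return;
        schedule_delayed_work(&tsc_irqwork, HZ);
    }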
-
NeilBrown authored
commit de5b8e8e upstream.

If you try to set grace_period or timeout via a module parameter to lockd, and you do this on a big-endian machine where sizeof(int) != sizeof(unsigned long), it won't work: the number given is effectively shifted right by the difference in those two sizes. So cast kp->arg properly to get the correct result.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
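A user-space illustration of the cast mismatch, assuming the buggy handler wrote through an int pointer into an unsigned long variable: on little-endian this happens to work for small values, while on big-endian the 4 bytes written land in the wrong half of the 8-byte slot and the stored value is garbled.

    #include <stdio.h>

    static unsigned long grace_period;

    /* buggy: stores 4 bytes at the start of an 8-byte slot; on big-endian
     * those are the most significant bytes */
    static void set_buggy(void *arg, int num) { *(int *)arg = num; }
    /* fixed: cast kp->arg to the variable's real type */
    static void set_fixed(void *arg, int num) { *(unsigned long *)arg = num; }

    int main(void)
    {
        grace_period = 0;
        set_buggy(&grace_period, 90);
        printf("buggy: %lu\n", grace_period); /* 90 on LE, wrong on BE */

        grace_period = 0;
        set_fixed(&grace_period, 90);
        printf("fixed: %lu\n", grace_period); /* 90 everywhere */
        return 0;
    }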
-
Steffen Klassert authored
[ Upstream commit 1265fd61 ]

We call the wrong replay notify function when we use ESN replay handling, with the result that we don't send notifications if ESN is in use. Fix this by calling the registered callbacks instead of xfrm_replay_notify().

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
stephen hemminger authored
[ Upstream commit 5676cc7b ]

Some BIOSes don't set up power management correctly (what else is new) and don't allow use of PCI Express power control. Add a special exception module parameter to allow working around this issue. Based on a slightly different patch by Knut Petersen.

Reported-by: Arkadiusz Miskiewicz <arekm@maven.pl>
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dave Jones authored
[ Upstream commit a6506e14 ]

No socket layer outputs a message for this error, and neither should RDS.

Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit 2a2a459e ]

napi->skb is allocated in napi_get_frags() using netdev_alloc_skb_ip_align(), with a reserve of NET_SKB_PAD + NET_IP_ALIGN bytes. However, when such an skb is recycled in napi_reuse_skb(), it ends up with a reserve of only NET_IP_ALIGN, which is suboptimal.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Dumazet authored
[ Upstream commit 94f826b8 ]

Commit f2c31e32 (net: fix NULL dereferences in check_peer_redir()) added a regression in rt6_fill_node(), leading to an rcu_read_lock() imbalance. That's because NLA_PUT() can jump to the nla_put_failure label. Fix this by using nla_put().

Many thanks to Ben Greear for his help.

Reported-by: Ben Greear <greearb@candelatech.com>
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Tested-by: Ben Greear <greearb@candelatech.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
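For context, a hedged sketch of why the macro is dangerous inside a locked section: NLA_PUT() hides a goto, so the open-coded nla_put() call lets the error path unlock first.

    /* NLA_PUT() as defined in that era's <net/netlink.h>: */
    #define NLA_PUT(skb, attrtype, attrlen, data) \
        do { \
            if (unlikely(nla_put(skb, attrtype, attrlen, data) < 0)) \
                goto nla_put_failure; \
        } while (0)

    /* fixed shape in rt6_fill_node() (approximate): unlock before bailing */
    rcu_read_lock();
    n = dst_get_neighbour(&rt->dst);
    if (n && nla_put(skb, RTA_GATEWAY, 16, &n->primary_key) < 0) {
        rcu_read_unlock();
        goto nla_put_failure;
    }
    rcu_read_unlock();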
-
Eric Dumazet authored
[ Upstream commit dc72d99d ]

Matt Evans spotted that the x86 bpf_jit was incorrectly handling negative constant offsets in the BPF_S_LDX_B_MSH instruction. We need to abort JIT compilation, as we do in common_load, so that the filter uses the interpreter code and can call __load_pointer().

Reference: http://lists.openwall.net/netdev/2011/07/19/11

Thanks to Indan Zupancic for bringing this issue back up.

Reported-by: Matt Evans <matt@ozlabs.org>
Reported-by: Indan Zupancic <indan@nul.nu>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Benjamin LaHaise authored
[ Upstream commit bbdb32cb ]

While testing L2TP functionality, I came across a bug in getsockname(). The IP address returned within the pppol2tp_addr's addr member was not being set to the IP address in use. This bug is caused by using inet_sk() on the wrong socket (the L2TP socket rather than the underlying UDP socket), and was likely introduced during the addition of L2TPv3 support.

Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: James Chapman <jchapman@katalix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dave Airlie authored
commit 3fa016a0 upstream.

Looking at what hibernate was overwriting, I thought it looked like a cursor, so I tracked down this missing piece: stopping the cursor blink timer. I've no idea if this is sufficient to fix the hibernate problems people are seeing, but please test it. Both radeon and nouveau have done this for a long time. I've personally run this all night through hibernate/resume cycles with no failures.

Reviewed-by: Keith Packard <keithp@keithp.com>
Reported-by: Petr Tesarik <kernel@tesarici.cz>
Reported-by: Stanislaw Gruszka <sgruszka@redhat.com>
Reported-by: Lots of misc segfaults after hibernate across the world.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=37142
Tested-by: Dave Airlie <airlied@redhat.com>
Tested-by: Bojan Smojver <bojan@rexursive.com>
Tested-by: Andreas Hartmann <andihartmann@01019freenet.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bing Zhao authored
commit fa0fb93f upstream.

For high-speed/super-speed isochronous endpoints, the bInterval value is used as an exponent, 2^(bInterval-1). Luckily we have the usb_fill_int_urb() function that handles it correctly, so we just call this function to fill in the RX URB.

Cc: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Acked-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
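A hedged sketch of the call (variable and completion-handler names assumed, not taken from the patch): usb_fill_int_urb() knows that for high/super-speed endpoints the interval field is an exponent and converts it internally, so the driver no longer open-codes the math.

    usb_fill_int_urb(urb, udev,
                     usb_rcvisocpipe(udev, ep->bEndpointAddress),
                     buf, size, isoc_complete, hdev,
                     ep->bInterval); /* treated as 2^(bInterval-1) for HS/SS */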
-
Sasha Levin authored
commit f946eeb9 upstream.

Module size was limited to 64MB; this was a legacy limitation due to vmalloc(), which was removed a while ago. Limiting module size to 64MB is both pointless and affects real-world use cases.

Cc: Tim Abbott <tim.abbott@oracle.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christoph Lameter authored
commit 66c4c35c upstream.

sysfs_slab_add() calls various sysfs functions that may actually end up in userspace doing all sorts of things. Release the slub_lock after adding the kmem_cache structure to the list. At that point the address of the kmem_cache is not known, so we are guaranteed exclusive access to the following modifications to the kmem_cache structure. If sysfs_slab_add fails, reacquire the slub_lock to remove the kmem_cache structure from the list.

Reported-by: Sasha Levin <levinsasha928@gmail.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Kara authored
commit d97d32ed upstream.

When an IO error happens during inode deletion run from xlog_recover_process_iunlinks(), the filesystem gets shut down. Any subsequent attempt to read buffers then fails. The code in xlog_recover_process_iunlinks() does not account for the fact that a read of a buffer which was read a while ago can really fail, which results in an oops on

    agi = XFS_BUF_TO_AGI(agibp);

Fix the problem by cleaning up the buffer handling in xlog_recover_process_iunlinks(), as suggested by Dave Chinner. We release the buffer lock but keep a buffer reference to the AG buffer. That is enough for the buffer to stay pinned in memory, and we don't have to call xfs_read_agi() all the time.

Signed-off-by: Jan Kara <jack@suse.cz>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Masanari Iida authored
commit 8da00edc upstream.

Fix a typo in drivers/video/backlight/tosa_lcd.c: "tosa_lcd_reume" should be "tosa_lcd_resume".

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andrei Warkentin authored
commit aadbe266 upstream.

Call the correct exit function on failure in dm_exception_store_init.

Signed-off-by: Andrei Warkentin <andrey.warkentin@gmail.com>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
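A hedged reconstruction of the pattern (labels and helper names approximated): when a later init step fails, the cleanup must undo the earlier, successful step, not the one that just failed.

    int dm_exception_store_init(void)
    {
        int r;

        r = dm_transient_snapshot_init();
        if (r)
            goto transient_fail;

        r = dm_persistent_snapshot_init();
        if (r)
            goto persistent_fail;

        return 0;

    persistent_fail:
        dm_transient_snapshot_exit(); /* the bug called the persistent exit here */
    transient_fail:
        return r;
    }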
-
Mikulas Patocka authored
commit 72c6e7af upstream.

Always set io->error to -EIO when an error is detected in dm-crypt. There were cases where an error code would be set only if we had finished processing the last sector. If there were other encryption operations in flight, the error would be ignored and the bio would be returned with success as if no error had happened. This bug is present in kcryptd_crypt_write_convert, kcryptd_crypt_read_convert and kcryptd_async_done.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mikulas Patocka authored
commit aeb2deae upstream.

This patch fixes a possible deadlock in dm-crypt's mempool use.

Currently, dm-crypt reserves a mempool of MIN_BIO_PAGES pages. It allocates the first MIN_BIO_PAGES with a non-failing allocation (the allocation cannot fail and waits until the mempool is refilled). Further pages are allocated with different gfp flags that allow failing.

Because allocations may be done in parallel, this code can deadlock. Example: two processes each try to allocate MIN_BIO_PAGES and run simultaneously. It may end up in a situation where each process allocates (MIN_BIO_PAGES / 2) pages. The mempool is exhausted. Each process waits for more pages to be freed to the mempool, which never happens.

To avoid this deadlock scenario, this patch changes the code so that only the first page is allocated with a non-failing gfp mask. Allocation of further pages may fail.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
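A hedged sketch of the allocation loop after the fix (gfp flags from the 3.x era; bio bookkeeping trimmed): only the first mempool_alloc() may wait, and every later one is allowed to fail, so two racing writers can no longer starve each other.

    gfp_t gfp_mask = GFP_NOIO | __GFP_HIGHMEM;
    unsigned i;
    struct page *page;

    for (i = 0; i < nr_iovecs; i++) {
        page = mempool_alloc(cc->page_pool, gfp_mask);
        if (!page)
            break; /* return the partial bio; the caller retries later */

        /* after the first page, drop the "wait until refilled" behaviour */
        gfp_mask = (gfp_mask | __GFP_NOWARN) & ~__GFP_WAIT;

        if (!bio_add_page(clone, page, PAGE_SIZE, 0)) {
            mempool_free(page, cc->page_pool);
            break;
        }
    }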
-
Jan Kara authored
commit a0391a3a upstream.

udf_release_file() can be called from the munmap() path with mmap_sem held. Thus we cannot take i_mutex there, because that ranks above mmap_sem. Luckily, i_mutex is not needed in udf_release_file() anymore, since protection by i_data_sem is enough to protect from races with write and truncate.

Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Reviewed-by: Namjae Jeon <linkinjeon@gmail.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michel Lespinasse authored
commit b18dafc8 upstream.

In d_materialise_unique() there are 3 subcases to the 'aliased dentry' case; in two subcases the inode i_lock is properly released, but this does not occur in the -ELOOP subcase. This seems to have been introduced by commit 18367501 ("fix loop checks in d_materialise_unique()").

Signed-off-by: Michel Lespinasse <walken@google.com>
[ Added a comment, and moved the unlock to where we generate the -ELOOP, which seems to be more natural. You probably can't actually trigger this without a buggy network file server - d_materialise_unique() is for finding aliases on non-local filesystems, and the d_ancestor() case is for a hardlinked directory loop. But we should be robust in the case of such buggy servers anyway. ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Theodore Ts'o authored
commit 31d4f3a2 upstream.

Explicitly test for an extent whose length is zero, and flag that as a corrupted extent. This avoids a kernel BUG_ON assertion failure.

Tested: Without this patch, the file system image found in tests/f_ext_zero_len/image.gz in the latest e2fsprogs sources causes a kernel panic. With this patch, an ext4 file system error is noted instead, and the file system is marked as being corrupted.

https://bugzilla.kernel.org/show_bug.cgi?id=42859

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
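The check lands in ext4's extent validator; a hedged sketch reconstructed from the description (helper names are real ext4 API):

    static int ext4_valid_extent(struct inode *inode, struct ext4_extent *ext)
    {
        ext4_fsblk_t block = ext4_ext_pblock(ext);
        int len = ext4_ext_get_actual_len(ext);

        if (len == 0)
            return 0; /* zero-length extent: corrupt, don't BUG later */

        return ext4_data_block_valid(EXT4_SB(inode->i_sb), block, len);
    }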
-
Lukas Czerner authored
commit 3d2b1582 upstream.

Ext4 does not support data journalling with delayed allocation enabled. We do not even allow mounting the file system with delayed allocation and data journalling enabled. However, the flag can be set via FS_IOC_SETFLAGS, so we can hit an inode with EXT4_INODE_JOURNAL_DATA set even on a file system mounted with delayed allocation (the default), and that is where the problem arises. The easiest way to reproduce this problem is with the following set of commands:

    mkfs.ext4 /dev/sdd
    mount /dev/sdd /mnt/test1
    dd if=/dev/zero of=/mnt/test1/file bs=1M count=4
    chattr +j /mnt/test1/file
    dd if=/dev/zero of=/mnt/test1/file bs=1M count=4 conv=notrunc
    chattr -j /mnt/test1/file

Additionally it can be reproduced quite reliably with xfstests 272 and 269. In fact the above reproducer is a part of test 272.

To fix this we should ignore the EXT4_INODE_JOURNAL_DATA inode flag if the file system is mounted with delayed allocation. This can be easily done by fixing the ext4_should_*_data() functions to ignore the data journal flag when delalloc is set (suggested by Ted). We also have to set the appropriate address space operations for the inode (again, ignoring the data journal flag if delalloc is enabled).

Additionally this commit introduces the ext4_inode_journal_mode() function, because the ext4_should_*_data() functions already shared a lot of common code, and this change puts it all into one function so it is easier to read.

Successfully tested with xfstests in the following configurations:

    delalloc + data=ordered
    delalloc + data=writeback
    data=journal
    nodelalloc + data=ordered
    nodelalloc + data=writeback
    nodelalloc + data=journal

Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Sandeen authored
commit 15291164 upstream.

journal_unmap_buffer()'s zap_buffer: code clears a lot of buffer head state ala discard_buffer(), but does not touch _Delay or _Unwritten as discard_buffer() does. This can be problematic in some areas of the ext4 code which assume that if they have found a buffer marked unwritten or delay, then it's a live one. Perhaps those spots should check whether it is mapped as well, but if jbd2 is going to tear down a buffer, let's really tear it down completely.

Without this I get some fsx failures on sub-page-block filesystems up until v3.2, at which point 4e96b2db and 189e868f make the failures go away, because buried within that large change is some more flag clearing. I still think it's worth doing in jbd2, since ->invalidatepage leads here directly, and it's the right place to clear away these flags.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-