- 29 Oct, 2010 40 commits
-
Michael Neuling authored
commit 54a83404 upstream. In f761622e we changed early_setup_secondary so it's called using the proper kernel stack rather than the emergency one. Unfortunately, this stack pointer can't be used when translation is off on PHYP as this stack pointer might be outside the RMO. This results in the following on all non-zero CPUs: cpu 0x1: Vector: 300 (Data Access) at [c00000001639fd10] pc: 000000000001c50c lr: 000000000000821c sp: c00000001639ff90 msr: 8000000000001000 dar: c00000001639ffa0 dsisr: 42000000 current = 0xc000000016393540 paca = 0xc000000006e00200 pid = 0, comm = swapper The original patch was only tested on a bare metal system, so it never caught this problem. This changes __secondary_start so that we calculate the new stack pointer but only start using it after we've called early_setup_secondary. With this patch, the above problem goes away. Signed-off-by:
Michael Neuling <mikey@neuling.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Matt Evans authored
commit f761622e upstream. As early setup calls down to slb_initialize(), we must have kstack initialised before checking "should we add a bolted SLB entry for our kstack?" Failing to do so means stack access requires an SLB miss exception to refill an entry dynamically, if the stack isn't accessible via SLB(0) (kernel text & static data). It's not always allowable to take such a miss, and intermittent crashes will result. Primary CPUs don't have this issue; an SLB entry is not bolted for their stack anyway (as that lives within SLB(0)). This patch therefore only affects the init of secondaries. Signed-off-by:
Matt Evans <matt@ozlabs.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Ben Hutchings authored
After walking the multicast list to set up the hash filter, this function will walk off the end of the list when filling the exact-match entries. This was fixed in mainline by the interface change made in commit f9dcbcc9. Reported-by: spamalot@hispeed.ch Reference: https://bugzilla.kernel.org/show_bug.cgi?id=15355 Reported-by:
Jason Heeris <jason.heeris@gmail.com> Reference: http://bugs.debian.org/600155 Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Florian Fainelli authored
commit 3bcf8229 upstream. As reported in <https://bugzilla.kernel.org/show_bug.cgi?id=15355>, r6040_multicast_list currently crashes. This is due to a wrong maximum of multicast entries. This patch fixes the following issues with multicast: - the maximum number of entries is off-by-one (4 instead of 3) - the writing of the hash table index is not necessary and leads to invalid values being written into the MCR1 register, so the MAC is simply put in a non-coherent state - when we exceed the maximum number of multicast addresses, writing the broadcast address should be done in registers MID_1{L,M,H} instead of MID_0{L,M,H}, otherwise we would lose the adapter's MAC address [bwh: Adjust for 2.6.32; should also apply to 2.6.27] Signed-off-by:
Florian Fainelli <florian@openwrt.org> Signed-off-by:
David S. Miller <davem@davemloft.net> Cc: Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
FUJITA Tomonori authored
commit 47897160 upstream. bsg incorrectly returns sg's masked_status value for device_status. [jejb: fix up expression logic] Reported-by:
Douglas Gilbert <dgilbert@interlog.com> Signed-off-by:
FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Signed-off-by:
James Bottomley <James.Bottomley@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Stanislaw Gruszka authored
commit aeb19f60 upstream. We have a Fedora bug report where the driver fails to initialize after suspend/resume because of memory allocation errors: https://bugzilla.redhat.com/show_bug.cgi?id=629158 To fix this, use GFP_KERNEL allocations where possible. Tested-by:
Neal Becker <ndbecker2@gmail.com> Signed-off-by:
Stanislaw Gruszka <sgruszka@redhat.com> Acked-by:
Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
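A minimal sketch of the idea (not the driver's actual code; the helper name is hypothetical): in contexts that may sleep, such as open or resume paths, GFP_KERNEL lets the allocator reclaim memory instead of failing outright as GFP_ATOMIC can.

    #include <linux/slab.h>

    /* Hedged sketch, hypothetical helper: pick the allocation flags from
     * the calling context; only atomic/IRQ context truly needs GFP_ATOMIC. */
    static void *alloc_rx_buffer(size_t len, bool can_sleep)
    {
            return kmalloc(len, can_sleep ? GFP_KERNEL : GFP_ATOMIC);
    }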
-
Stanislaw Gruszka authored
commit 392bd0cb upstream. Skge devices installed on some Gigabyte motherboards are not able to perform 64-bit DMA correctly due to the board's PCI implementation, so limit DMA to 32 bits if such boards are detected. The bug was reported here: https://bugzilla.redhat.com/show_bug.cgi?id=447489 Signed-off-by:
Stanislaw Gruszka <sgruszka@redhat.com> Tested-by:
Luya Tshimbalanga <luya@fedoraproject.org> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
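A minimal sketch of the approach (the function name and the way affected boards are detected are assumptions, not taken from the driver):

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    /* Hedged sketch: fall back to a 32-bit DMA mask on boards known to
     * mishandle 64-bit DMA, otherwise keep the full 64-bit mask. */
    static int skge_like_set_dma_mask(struct pci_dev *pdev, bool board_is_broken)
    {
            u64 mask = board_is_broken ? DMA_BIT_MASK(32) : DMA_BIT_MASK(64);

            return pci_set_dma_mask(pdev, mask);
    }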
-
Jianzhao Wang authored
[ Upstream commit ae2688d5 ] Blackhole routes are used when xfrm_lookup() returns -EREMOTE (an error triggered by IKE, for example), hence this kind of route is always temporary and so we should check whether a better route exists for subsequent packets. The bug was introduced by commit d11a4dc1. Signed-off-by:
Jianzhao Wang <jianzhao.wang@6wind.com> Signed-off-by:
Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
David S. Miller authored
[ Upstream commit 9828e6e6 ] Just use explicit casts, since we really can't change the types of structures exported to userspace which have been around for 15 years or so. Reported-by:
Dan Rosenberg <dan.j.rosenberg@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Eric Dumazet authored
[ Upstream commit 7e96dc70 ] skb->truesize is set in the core network code. Don't change it unless dealing with fragments. Signed-off-by:
Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Tom Marshall authored
[ Upstream commit a4d25803 ] If a RST comes in immediately after checking sk->sk_err, tcp_poll will return POLLIN but not POLLOUT. Fix this by checking sk->sk_err at the end of tcp_poll. Additionally, ensure the correct order of operations on SMP machines with memory barriers. Signed-off-by:
Tom Marshall <tdm.code@gmail.com> Signed-off-by:
Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Kees Cook authored
[ Upstream commit b00916b1 ] Several other ethtool functions can potentially leave heap memory uncleared by drivers. Some interfaces appear safe (eeprom, etc.), in that the sizes are well controlled. In some situations (e.g. unchecked error conditions), areas of the heap buffer remain unchanged before being copied back to userspace. Note that these are less of an issue since they all require CAP_NET_ADMIN. Cc: stable@kernel.org Signed-off-by:
Kees Cook <kees.cook@canonical.com> Acked-by:
Ben Hutchings <bhutchings@solarflare.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
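A minimal sketch of the pattern this kind of fix relies on (a hypothetical helper, not the ethtool code itself): zeroing the buffer up front means any bytes a driver fails to fill cannot leak old heap contents to userspace.

    #include <linux/slab.h>
    #include <linux/uaccess.h>

    /* Hedged sketch: allocate zeroed memory, let the driver fill it, then
     * copy the (fully initialized) buffer back to userspace. */
    static int fill_and_copy_to_user(void __user *useraddr, size_t len,
                                     int (*fill)(void *buf))
    {
            void *buf = kzalloc(len, GFP_USER);
            int ret;

            if (!buf)
                    return -ENOMEM;
            ret = fill(buf);
            if (!ret && copy_to_user(useraddr, buf, len))
                    ret = -EFAULT;
            kfree(buf);
            return ret;
    }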
-
Eric Dumazet authored
[ Upstream commit 3d13008e ] Special care should be taken when the slow path is hit in ip_fragment(): when walking through frags, we transfer truesize ownership from skb to frags. Then if we hit a slow_path condition, we must undo this or risk uncharging frags->truesize twice, and in the end, having a negative socket sk_wmem_alloc counter, or even freeing the socket sooner than expected. Many thanks to Nick Bowler, who provided a very clean bug report and test program. Thanks to Jarek for reviewing my first patch and providing a V2. While Nick's bisection pointed to commit 2b85a34e (net: No more expensive sock_hold()/sock_put() on each tx), the underlying bug is older (2.6.12-rc5). A side effect is to extend work done in commit b2722b1c (ip_fragment: also adjust skb->truesize for packets not owned by a socket) to ipv6 as well. Reported-and-bisected-by:
Nick Bowler <nbowler@elliptictech.com> Tested-by:
Nick Bowler <nbowler@elliptictech.com> Signed-off-by:
Eric Dumazet <eric.dumazet@gmail.com> CC: Jarek Poplawski <jarkao2@gmail.com> CC: Patrick McHardy <kaber@trash.net> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Maciej Żenczykowski authored
[ Upstream commit ae878ae2 ] Signed-off-by:
Maciej Żenczykowski <maze@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Kumar Sanghvi authored
[ Upstream commit a91e7d47 ] Retrieve the header after doing pskb_may_pull, since pskb_may_pull could change the buffer structure. This is based on the comment given by Eric Dumazet on the Phonet Pipe controller patch for a similar problem. Signed-off-by:
Kumar Sanghvi <kumar.sanghvi@stericsson.com> Acked-by:
Linus Walleij <linus.walleij@stericsson.com> Acked-by:
Eric Dumazet <eric.dumazet@gmail.com> Acked-by:
Rémi Denis-Courmont <remi.denis-courmont@nokia.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
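A minimal sketch of the general rule (the header type and names are hypothetical): pskb_may_pull() may reallocate skb->data, so header pointers must be taken only after the call.

    #include <linux/skbuff.h>

    struct example_hdr { u8 type; u8 len; };  /* hypothetical protocol header */

    static int parse_example_hdr(struct sk_buff *skb)
    {
            struct example_hdr *hdr;

            if (!pskb_may_pull(skb, sizeof(*hdr)))
                    return -EINVAL;
            hdr = (struct example_hdr *)skb->data;  /* re-read only after the pull */
            return hdr->type;
    }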
-
Nagendra Tomar authored
[ Upstream commit 482964e5 ] This patch fixes the condition (3rd arg) passed to sk_wait_event() in sk_stream_wait_memory(). The incorrect check in sk_stream_wait_memory() causes the following soft lockup in tcp_sendmsg() when the global tcp memory pool has been exhausted. >>> snip <<< localhost kernel: BUG: soft lockup - CPU#3 stuck for 11s! [sshd:6429] localhost kernel: CPU 3: localhost kernel: RIP: 0010:[sk_stream_wait_memory+0xcd/0x200] [sk_stream_wait_memory+0xcd/0x200] sk_stream_wait_memory+0xcd/0x200 localhost kernel: localhost kernel: Call Trace: localhost kernel: [sk_stream_wait_memory+0x1b1/0x200] sk_stream_wait_memory+0x1b1/0x200 localhost kernel: [<ffffffff802557c0>] autoremove_wake_function+0x0/0x40 localhost kernel: [ipv6:tcp_sendmsg+0x6e6/0xe90] tcp_sendmsg+0x6e6/0xce0 localhost kernel: [sock_aio_write+0x126/0x140] sock_aio_write+0x126/0x140 localhost kernel: [xfs:do_sync_write+0xf1/0x130] do_sync_write+0xf1/0x130 localhost kernel: [<ffffffff802557c0>] autoremove_wake_function+0x0/0x40 localhost kernel: [hrtimer_start+0xe3/0x170] hrtimer_start+0xe3/0x170 localhost kernel: [vfs_write+0x185/0x190] vfs_write+0x185/0x190 localhost kernel: [sys_write+0x50/0x90] sys_write+0x50/0x90 localhost kernel: [system_call+0x7e/0x83] system_call+0x7e/0x83 >>> snip <<< What is happening is that the sk_wait_event() condition passed from sk_stream_wait_memory() evaluates to true for the case of tcp global memory exhaustion. This is because both sk_stream_memory_free() and vm_wait are true, which causes sk_wait_event() to *not* call schedule_timeout(). Hence sk_stream_wait_memory() returns immediately to the caller w/o sleeping. This causes the caller to again try allocation, which again fails and again calls sk_stream_wait_memory(), and so on. [ Bug introduced by commit c1cbe4b7 ("[NET]: Avoid atomic xchg() for non-error case") -DaveM ] Signed-off-by:
Nagendra Singh Tomar <tomer_iisc@yahoo.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
David S. Miller authored
[ Upstream commit 01db403c ] Fixes kernel bugzilla #16603. tcp_sendmsg() truncates iov_len to an 'int', which causes a 4GB write to write zero bytes, for example. There is also the problem higher up of how verify_iovec() works. It wants to prevent the total length from looking like an error return value. However, it does this using 'int', but syscalls return 'long' (and thus signed 64-bit on 64-bit machines). So it could trigger false positives on 64-bit as written. So fix it to use 'long'. Reported-by:
Olaf Bonorden <bono@onlinehome.de> Reported-by:
Daniel Büse <dbuese@gmx.de> Reported-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
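A small userspace illustration of the truncation (not kernel code): a 4 GiB length does not fit in an 'int', which is why the total must be kept in a 'long' on 64-bit.

    #include <stdio.h>

    int main(void)
    {
            unsigned long len = 4UL << 30;          /* a 4 GiB write request */
            int truncated = (int)len;               /* what an 'int' total sees */

            printf("%lu -> %d\n", len, truncated);  /* prints: 4294967296 -> 0 */
            return 0;
    }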
-
Ulrich Weber authored
[ Upstream commit 94e22389 ] Don't compare the ECN and IP Precedence bits in find_bundle, and use the ECN-bit-stripped TOS value in xfrm_lookup. Signed-off-by:
Ulrich Weber <uweber@astaro.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
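A minimal sketch of the masking step (the helper name is hypothetical): the two low ECN bits are dropped from the TOS before it is used for bundle comparison and lookup.

    #include <net/inet_ecn.h>

    /* Hedged sketch: route/bundle decisions should see the TOS without the
     * ECN bits, so differing ECN marks do not look like different flows. */
    static inline u8 tos_without_ecn(u8 tos)
    {
            return tos & ~INET_ECN_MASK;
    }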
-
Dave Airlie authored
commit f459ffbd upstream. fixes https://bugzilla.kernel.org/show_bug.cgi?id=19012 Signed-off-by:
Dave Airlie <airlied@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Linus Torvalds authored
commit 799c1055 upstream. Don't try to "optimize" rds_page_copy_user() by using kmap_atomic() and the unsafe atomic user mode accessor functions. It's actually slower than the straightforward code on any reasonable modern CPU. Back when the code was written (although probably not by the time it was actually merged, though), 32-bit x86 may have been the dominant architecture. And there kmap_atomic() can be a lot faster than kmap() (unless you have very good locality, in which case the virtual address caching by kmap() can overcome all the downsides). But these days, x86-64 may not be more populous, but it's getting there (and if you care about performance, it's definitely already there - you'd have upgraded your CPU's already in the last few years). And on x86-64, the non-kmap_atomic() version is faster, simply because the code is simpler and doesn't have the "re-try page fault" case. People with old hardware are not likely to care about RDS anyway, and the optimization for the 32-bit case is simply buggy, since it doesn't verify the user addresses properly. Reported-by:
Dan Rosenberg <drosenberg@vsecurity.com> Acked-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Borislav Petkov authored
commit 6dcbfe4f upstream. This fixes possible cases of not collecting valid error info in the MCE error thresholding groups on F10h hardware. The current code contains a subtle problem of checking only the Valid bit of MSR0000_0413 (which is MC4_MISC0 - DRAM thresholding group) in its first iteration and breaking out if the bit is cleared. But (!), this MSR contains an offset value, BlkPtr[31:24], which points to the remaining MSRs in this thresholding group which might contain valid information too. But if we bail out only after we checked the valid bit in the first MSR and not the block pointer too, we miss that other information. The thing is, MC4_MISC0[BlkPtr] is not predicated on MCi_STATUS[MiscV] or MC4_MISC0[Valid] and should be checked prior to iterating over the MCI_MISCj thresholding group, irrespective of the MC4_MISC0[Valid] setting. Signed-off-by:
Borislav Petkov <borislav.petkov@amd.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Luca Tettamanti authored
commit ec5a32f6 upstream. adapter->cmb.cmb is initialized when the device is opened and freed when it's closed. Accessing it unconditionally during resume results either in a crash (a NULL pointer dereference, when the interface has not been opened yet) or data corruption (when the interface has been used and brought down, adapter->cmb.cmb points to a deallocated memory area). Signed-off-by:
Luca Tettamanti <kronos.it@gmail.com> Acked-by:
Chris Snook <chris.snook@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
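A minimal sketch of the defensive check (names are hypothetical, not the atl1 code): resources that only exist while the interface is open must not be touched unconditionally on resume.

    #include <linux/netdevice.h>

    /* Hedged sketch: skip the restore work entirely when the interface was
     * never opened (or has been closed), so no freed/NULL buffer is touched. */
    static void resume_restore_coalesce(struct net_device *netdev, void *cmb)
    {
            if (!netif_running(netdev) || !cmb)
                    return;
            /* ... safe to reinitialize the coalescing block here ... */
    }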
-
Johannes Berg authored
commit df6d0230 upstream. When a driver doesn't fill the entire buffer, old heap contents may remain, and if it also doesn't update the length properly, this old heap content will be copied back to userspace. It is very unlikely that this happens in any of the drivers using private ioctls since it would show up as junk being reported by iwpriv, but it seems better to be safe here, so use kzalloc. Reported-by:
Jeff Mahoney <jeffm@suse.com> Signed-off-by:
Johannes Berg <johannes.berg@intel.com> Signed-off-by:
John W. Linville <linville@tuxdriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Joel Becker authored
commit 1fc8a117 upstream. ocfs2 fast symlinks are NUL terminated strings stored inline in the inode data area. However, disk corruption or a local attacker could, in theory, remove that NUL. Because we're using strlen() (my fault, introduced in a731d1 when removing vfs_follow_link()), we could walk off the end of that string. Signed-off-by:
Joel Becker <joel.becker@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
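A minimal sketch of the bounded scan (the limit name is hypothetical): strnlen() caps the walk at the size of the inline data area, so a missing NUL cannot run past it.

    #include <linux/string.h>

    static size_t fast_symlink_len(const char *link, size_t inline_area_size)
    {
            /* Hedged sketch: never scan beyond the inline data area, even if
             * corruption or an attacker removed the terminating NUL. */
            return strnlen(link, inline_area_size);
    }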
-
Yegor Yefremov authored
commit 6abb930a upstream. ret is still -1 if, during the polling, read_byte() returns at once with I2C_PCA_CON_SI set. So the ret > 0 check would lead *_waitforcompletion() to return 0 despite the operation having completed properly. The routine was rewritten so that ret always has a proper value before returning. Signed-off-by:
Yegor Yefremov <yegorslists@googlemail.com> Reviewed-by:
Wolfram Sang <w.sang@pengutronix.de> Signed-off-by:
Jean Delvare <khali@linux-fr.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Salman Qazi authored
commit f13d4f97 upstream. The race between CPU X and CPU Y is as follows: CPU X runs remove_hrtimer (state & QUEUED == 0), sets timer->state = CALLBACK, unlocks the timer base and runs the very long callback timer->fn(). Meanwhile CPU Y runs hrtimer_start: it locks the timer base, calls remove_hrtimer (no effect), enqueues the timer, sets timer->state = CALLBACK | QUEUED and unlocks the timer base. CPU Y then runs hrtimer_start again: it locks the timer base and calls remove_hrtimer with mode = INACTIVE, so the CALLBACK bit is lost! With the CALLBACK bit not set, switch_hrtimer_base changes timer->base to a different CPU, and CPU X then locks this CPU's timer base. The bug was introduced with commit ca109491 (hrtimer: removing all ur callback modes) in 2.6.29. [ tglx: Feed new state via local variable and add a comment. ] Signed-off-by:
Salman Qazi <sqazi@google.com> Cc: akpm@linux-foundation.org Cc: Peter Zijlstra <peterz@infradead.org> LKML-Reference: <20101012142351.8485.21823.stgit@dungbeetle.mtv.corp.google.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Simon Guinot authored
commit cc60f887 upstream. When using simultaneously the two DMA channels on a same engine, some transfers are never completed. For example, an endless lock can occur while writing heavily on a RAID5 array (with async-tx offload support enabled). Note that this issue can also be reproduced by using the DMA test client. On a same engine, the interrupt cause register is shared between two DMA channels. This patch make sure that the cause bit is only cleared for the requested channel. Signed-off-by:
Simon Guinot <sguinot@lacie.com> Tested-by:
Luc Saillard <luc@saillard.org> Acked-by:
saeed bishara <saeed.bishara@gmail.com> Signed-off-by:
Dan Williams <dan.j.williams@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
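A minimal sketch of the idea (the register layout and write-zero-to-clear semantics are assumptions, not taken from the driver): acknowledge only the bits that belong to the channel being serviced.

    #include <linux/io.h>

    /* Hedged sketch: with two channels sharing one cause register, write 1s
     * everywhere except this channel's bits so only they are cleared. */
    static void clear_chan_irq_cause(void __iomem *cause_reg, int chan_idx)
    {
            u32 chan_bits = 0xffff << (chan_idx * 16);  /* assumed per-channel split */

            writel(~chan_bits, cause_reg);  /* assumed write-0-to-clear register */
    }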
-
Steven Rostedt authored
commit d0134324 upstream. Time stamps for the ring buffer are created by the difference between two events. Each page of the ring buffer holds a full 64 bit timestamp. Each event has a 27 bit delta stamp from the last event. The unit of time is nanoseconds, so 27 bits can hold ~134 milliseconds. If two events happen more than 134 milliseconds apart, a time extend is inserted to add more bits for the delta. The time extend has 59 bits, which is good for ~18 years. Currently the time extend is committed separately from the event. If an event is discarded before it is committed, due to filtering, the time extend still exists. If all events are being filtered, then after ~134 milliseconds a new time extend will be added to the buffer. This can only happen till the end of the page. Since each page holds a full timestamp, there is no reason to add a time extend to the beginning of a page. Time extends can only fill a page that has actual data at the beginning, so there is no fear that time extends will fill more than a page without any data. When reading an event, a loop is made to skip over time extends since they are only used to maintain the time stamp and are never given to the caller. As a paranoid check to prevent the loop running forever, with the knowledge that time extends may only fill a page, a check is made that tests the iteration of the loop, and if the iteration is more than the number of time extends that can fit in a page, a warning is printed and the ring buffer is disabled (all of ftrace is also disabled with it). There is another event type that is called a TIMESTAMP which can hold 64 bits of data in the theoretical case that two events happen 18 years apart. This code has not been implemented, but the name of this event exists, as well as the structure for it. The size of a TIMESTAMP is 16 bytes, whereas a time extend is only 8 bytes. The macro used to calculate how many time extends can fit on a page used the TIMESTAMP size instead of the time extend size, cutting the amount in half. The following test case can easily trigger the warning since we only need to have half the page filled with time extends to trigger the warning: # cd /sys/kernel/debug/tracing/ # echo function > current_tracer # echo 'common_pid < 0' > events/ftrace/function/filter # echo > trace # echo 1 > trace_marker # sleep 120 # cat trace Enabling the function tracer and then setting the filter to only trace functions where the process id is negative (no events), then clearing the trace buffer to ensure that we have nothing in the buffer, then writing to trace_marker to add an event to the beginning of a page, sleeping for 2 minutes (only 35 seconds is probably needed, but this guarantees the bug), and then finally reading the trace will trigger the bug. This patch fixes the typo and prevents the false positive of that warning. Reported-by:
Hans J. Koch <hjk@linutronix.de> Tested-by:
Hans J. Koch <hjk@linutronix.de> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
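A small arithmetic check of the figures quoted above (plain userspace C, not tracing code): 2^27 ns is about 134 ms and 2^59 ns is roughly 18 years.

    #include <stdio.h>

    int main(void)
    {
            unsigned long long delta27 = 1ULL << 27;   /* 27-bit delta, in ns */
            unsigned long long extend59 = 1ULL << 59;  /* 59-bit time extend, in ns */

            printf("27 bits: %.1f ms\n", delta27 / 1e6);                       /* ~134.2 ms */
            printf("59 bits: %.1f years\n", extend59 / 1e9 / 3600 / 24 / 365); /* ~18.3 years */
            return 0;
    }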
-
Tejun Heo authored
commit 47526903 upstream. Commit f81f2f7c (ubd: drop unnecessary rq->sector manipulation) dropped request->sector manipulation in preparation for global request handling cleanup; unfortunately, it incorrectly assumed that the updated sector wasn't being used. ubd tries to issue as many requests as possible to io_thread. When issuing fails due to memory pressure or other reasons, the device is put on the restart list and issuing stops. On IO completion, devices on the restart list are scanned and IO issuing is restarted. ubd issues IOs sg-by-sg and issuing can be stopped in the middle of a request, so each device on the restart queue needs to remember where to restart in its current request. ubd needs to keep track of the issue position itself because, * blk_rq_pos(req) is now updated by the block layer to keep track of _completion_ position. * Multiple io_req's for the current request may be in flight, so it's difficult to tell where blk_rq_pos(req) currently is. Add ubd->rq_pos to keep track of the issue position and use it to correctly restart io_req issue. Signed-off-by:
Tejun Heo <tj@kernel.org> Reported-by:
Richard Weinberger <richard@nod.at> Tested-by:
Richard Weinberger <richard@nod.at> Tested-by:
Chris Frey <cdfrey@foursquare.net> Signed-off-by:
Jens Axboe <jaxboe@fusionio.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Thomas Gleixner authored
commit 1cf180c9 upstream. free_irq_cfg() is not freeing the cpumask_vars in irq_cfg. Fixing this triggers a use after free caused by the fact that copying struct irq_cfg is done with memcpy, which copies the pointer not the cpumask. Fix both places. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Cc: Yinghai Lu <yhlu.kernel@gmail.com> LKML-Reference: <alpine.LFD.2.00.1009282052570.2416@localhost6.localdomain6> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
H. Peter Anvin <hpa@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
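A minimal sketch of the pitfall (struct and helper names are hypothetical): memcpy of a structure containing a cpumask_var_t copies the pointer, not the mask, so the mask must be copied explicitly into storage the destination already owns.

    #include <linux/cpumask.h>

    struct example_cfg {
            cpumask_var_t domain;   /* a pointer when CPUMASK_OFFSTACK=y */
            int vector;
    };

    /* Hedged sketch: dst->domain must already be allocated by the caller
     * (alloc_cpumask_var); only the mask contents are copied here. */
    static void example_cfg_copy(struct example_cfg *dst, const struct example_cfg *src)
    {
            dst->vector = src->vector;
            cpumask_copy(dst->domain, src->domain);
    }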
-
Thomas Gleixner authored
commit 02198962 upstream. create_irq() returns -1 if the interrupt allocation failed, but the code checks for irq == 0. Use create_irq_nr() instead. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Cc: Venkatesh Pallipadi <venki@google.com> LKML-Reference: <alpine.LFD.2.00.1009282310360.2416@localhost6.localdomain6> Signed-off-by:
H. Peter Anvin <hpa@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
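A minimal sketch of the corrected check (the arguments shown are assumptions): create_irq() reports failure with a negative value, so testing for 0 never fires, while create_irq_nr() returns 0 on failure and matches that test.

    #include <linux/irq.h>

    static int assign_example_irq(void)
    {
            unsigned int irq = create_irq_nr(0, -1);  /* assumed arguments */

            if (!irq)
                    return -EBUSY;  /* allocation failed */
            return irq;
    }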
-
Kenneth Waters authored
commit d2520a42 upstream. Fixed JSIOCSAXMAP ioctl to update absmap, the map from hardware axis to event axis in addition to abspam. This fixes a regression introduced by 999b874f. Signed-off-by:
Kenneth Waters <kwwaters@gmail.com> Signed-off-by:
Dmitry Torokhov <dtor@mail.ru> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Dmitri Belimov authored
commit 08be64be upstream. Some customers have problems with DVB-T quality: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/446575 The regression appeared after this patch: http://patchwork.kernel.org/patch/23345/ This patch fixes that DVB-T regression. Tested by many people. Signed-off-by:
Alexey Osipov <lion-simba@pridelands.ru> Signed-off-by:
Beholder Intl. Ltd. Dmitry Belimov <d.belimov@gmail.com> Signed-off-by:
Mauro Carvalho Chehab <mchehab@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Mauro Carvalho Chehab authored
commit c10469c6 upstream. As reported by: Carlos Americo Domiciano <c_domiciano@yahoo.com.br>: [ 220.033500] cx231xx v4l2 driver loaded. [ 220.033571] cx231xx #0: New device Conexant Corporation Polaris AV Capturb @ 480 Mbps (1554:5010) with 6 interfaces [ 220.033577] cx231xx #0: registering interface 0 [ 220.033591] cx231xx #0: registering interface 1 [ 220.033654] cx231xx #0: registering interface 6 [ 220.033910] cx231xx #0: Identified as Unknown CX231xx video grabber (card=0) [ 220.033946] BUG: unable to handle kernel NULL pointer dereference at (null) [ 220.033955] IP: [<ffffffffa0d3c8bd>] cx231xx_pre_card_setup+0x5d/0xb0 [cx231xx] Thanks-to: Carlos Americo Domiciano <c_domiciano@yahoo.com.br> Signed-off-by:
Mauro Carvalho Chehab <mchehab@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Linus Torvalds authored
commit 3e645d6b upstream. The compat code for the VIDIOCSMICROCODE ioctl is totally buggered. It's only used by the VIDEO_STRADIS driver, and that one is scheduled to staging and eventually removed unless somebody steps up to maintain it (at which point it should use request_firmware() rather than some magic ioctl). So we'll get rid of it eventually. But in the meantime, the compatibility ioctl code is broken, and this tries to get it to at least limp along (even if Mauro suggested just deleting it entirely, which may be the right thing to do - I don't think the compatibility translation code has ever worked unless you were very lucky). Reported-by:
Kees Cook <kees.cook@canonical.com> Cc: Mauro Carvalho Chehab <mchehab@infradead.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Steven Rostedt authored
commit 258af474 upstream. The guest can use the paravirt clock in kvmclock.c which is used by sched_clock(), which in turn is used by the tracing mechanism for timestamps, which leads to infinite recursion. Disable mcount/tracing for kvmclock.o. Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Avi Kivity <avi@redhat.com> Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Jeremy Fitzhardinge authored
commit 9ecd4e16 upstream. When using a paravirt clock, pvclock.c can be used by sched_clock(), which in turn is used by the tracing mechanism for timestamps, which leads to infinite recursion. Disable mcount/tracing for pvclock.o. Signed-off-by:
Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> LKML-Reference: <4C9A9A3F.4040201@goop.org> Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Joerg Roedel authored
commit 4c894f47 upstream. This patch adds a workaround for an IOMMU BIOS problem to the AMD IOMMU driver. The result of the bug is that the IOMMU does not execute commands anymore when the system comes out of the S3 state, resulting in system failure. The bug in the BIOS is that it does not restore certain hardware-specific registers correctly. This workaround reads out the contents of these registers at boot time and restores them on resume from S3. The workaround is limited to the specific IOMMU chipset where this problem occurs. Signed-off-by:
Joerg Roedel <joerg.roedel@amd.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Joerg Roedel authored
commit 04e0463e upstream. In the __unmap_single function the dma_addr is rounded down to a page boundary before the dma pages are unmapped. The address is later also used to flush the TLB entries for that mapping. But without the offset into the dma page the amount of pages to flush might be miscalculated in the TLB flushing path. This patch fixes this bug by using the original address to flush the TLB. Signed-off-by:
Joerg Roedel <joerg.roedel@amd.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Joerg Roedel authored
commit e9bf5197 upstream. This patch moves the setting of the configuration and feature flags out of the ACPI table parsing path and into the iommu-enable path. This is needed to reliably fix resume-from-s3. Signed-off-by:
Joerg Roedel <joerg.roedel@amd.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-