- 23 Mar, 2011 40 commits
-
Michael Neuling authored
commit 60adec62 upstream. When we are crashing, the crashing/primary CPU IPIs the secondaries to turn off IRQs, go into real mode and wait in kexec_wait. While this is happening, the primary tears down all the MMU maps. Unfortunately the primary doesn't check that the secondaries have entered real mode before doing this. On PHYP machines, the secondaries can take a long time shutting down the IRQ controller because RTAS calls are needed. These RTAS calls must be serialised, which results in the secondaries contending in lock_rtas() and hence taking a long time to shut down. We've hit this on large POWER7 machines, where some secondaries are still waiting in lock_rtas() when the primary tears down the HPTEs. This patch makes sure all secondaries are in real mode before the primary tears down the MMU. It uses the new kexec_state entry in the paca, and times out if the secondaries don't reach real mode within 10 seconds. Signed-off-by:
Michael Neuling <mikey@neuling.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> cc: Anton Blanchard <anton@samba.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
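For illustration only, a minimal sketch of the bounded wait described above; all_secondaries_in_real_mode() is a hypothetical stand-in for the per-CPU paca kexec_state check, not code from the patch:

    #include <linux/delay.h>
    #include <linux/kernel.h>
    #include <linux/types.h>

    /* Hypothetical helper standing in for checking kexec_state in each paca. */
    extern bool all_secondaries_in_real_mode(void);

    static void wait_for_secondaries_real_mode(void)
    {
            int i;

            /* Busy-poll for roughly 10 seconds, then give up and carry on. */
            for (i = 0; i < 10 * 1000; i++) {
                    if (all_secondaries_in_real_mode())
                            return;
                    mdelay(1);
            }
            pr_warn("kexec: secondaries never reached real mode, continuing\n");
    }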
-
Michael Neuling authored
commit 1fc711f7 upstream. In kexec_prepare_cpus, the primary CPU IPIs the secondary CPUs to kexec_smp_down(). kexec_smp_down() calls kexec_smp_wait(), which sets the hw_cpu_id() to -1. The primary does this while leaving IRQs on, which means the primary can take a timer interrupt which can lead to IPIing one of the secondary CPUs (say, for a scheduler re-balance), but since the secondary CPU now has hw_cpu_id = -1, we IPI CPU -1... Kaboom! We are hitting this case regularly on POWER7 machines. There is also a second race, where the primary will tear down the MMU mappings before knowing the secondaries have entered real mode. Also, the secondaries are clearing out any pending IPIs before guaranteeing that no more will be received. This changes kexec_prepare_cpus() so that we turn off IRQs in the primary CPU much earlier. It adds a paca flag to say that the secondaries have entered the kexec_smp_down() IPI and turned off IRQs, rather than overloading hw_cpu_id with -1. This new paca flag is again used to indicate when the secondaries have entered real mode. It also ensures that all CPUs have their IRQs off before we clear out any pending IPI requests (in kexec_cpu_down()) to ensure there are no trailing IPIs left unacknowledged. Signed-off-by:
Michael Neuling <mikey@neuling.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by:
Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> cc: Anton Blanchard <anton@samba.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Stefan Nilsson XK authored
commit 0aab3995 upstream. During redetection of an SDIO card, a request for a new card RCA was submitted to the card, but was then overwritten by the old RCA. Using the incorrect RCA caused the card to be deselected instead of selected. This bug has been present since the "oldcard" handling was introduced in 2.6.32. Signed-off-by:
Stefan Nilsson XK <stefan.xk.nilsson@stericsson.com> Reviewed-by:
Ulf Hansson <ulf.hansson@stericsson.com> Reviewed-by:
Pawel Wieczorkiewicz <pawel.wieczorkiewicz@stericsson.com> Signed-off-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Chris Ball <cjb@laptop.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Roman Fietze authored
commit 6ced9e6b upstream. The struct i2c_board_info member holding the name is "type", not "name". Signed-off-by:
Roman Fietze <roman.fietze@telemotive.de> Signed-off-by:
Jean Delvare <khali@linux-fr.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
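As a hedged sketch of the corrected usage (the "lm75" device at 0x48 is an invented example, not from the patch), the device name lives in the .type member, which the I2C_BOARD_INFO() helper fills in:

    #include <linux/i2c.h>

    /* Example only: declare a board info entry; the name goes in .type. */
    static struct i2c_board_info example_i2c_info = {
            I2C_BOARD_INFO("lm75", 0x48),   /* sets .type = "lm75", .addr = 0x48 */
    };

    /* Spelled out by hand -- note .type, not .name: */
    static struct i2c_board_info example_i2c_info2 = {
            .type = "lm75",
            .addr = 0x48,
    };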
-
Thomas Gleixner authored
commit 9804c9ea upstream. The CHECK_IRQ_PER_CPU check is wrong: it should be testing irq_to_desc(irq)->status, not just irq. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
James Bottomley <James.Bottomley@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Milton Miller authored
commit 723aae25 upstream. Mike Galbraith reported finding a lockup ("perma-spin bug") where the cpumask passed to smp_call_function_many was cleared by other cpu(s) while a cpu was preparing its call_data block, resulting in no cpu being left to clear the last ref and unlock the block. Having cpus clear their bit asynchronously could be useful on a mask of cpus that might have a translation context, or cpus that need a push to complete an rcu window. Instead of adding a BUG_ON and requiring yet another cpumask copy, just detect the race and handle it. Note: arch_send_call_function_ipi_mask must still handle an empty cpumask because the data block is globally visible before that arch callback is made. And (obviously) there are no guarantees as to which cpus are notified if the mask is changed during the call; only cpus that were online and had their mask bit set during the whole call are guaranteed to be called. Reported-by:
Mike Galbraith <efault@gmx.de> Reported-by:
Jan Beulich <JBeulich@novell.com> Acked-by:
Jan Beulich <jbeulich@novell.com> Signed-off-by:
Milton Miller <miltonm@bga.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
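A loose userspace sketch of the hazard and the detect-and-handle fix (C11 atomics; every name here is invented and this is not the kernel code): if the reference count is derived from a mask another thread may clear at any time, the publisher must be prepared to find zero targets and bail out rather than wait forever:

    #include <stdatomic.h>
    #include <stdbool.h>

    struct call_data {
            void (*func)(void *);
            void *info;
            atomic_uint targets;    /* one bit per target "cpu" */
            atomic_int refs;        /* targets that still have to run func */
    };

    /*
     * The requested mask may be cleared by its owner concurrently, so latch a
     * snapshot, derive refs from that same snapshot, and report an empty
     * snapshot so the caller does not spin waiting for refs to reach zero.
     */
    static bool publish_call(struct call_data *d, atomic_uint *requested,
                             void (*func)(void *), void *info)
    {
            unsigned int snap = atomic_load(requested);

            if (snap == 0)
                    return false;   /* race detected: nothing left to notify */

            d->func = func;
            d->info = info;
            atomic_store(&d->refs, (int)__builtin_popcount(snap));
            atomic_store(&d->targets, snap);
            return true;
    }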
-
Tilman Schmidt authored
commit bc10f967 upstream. Remove the call to tty_ldisc_flush() from the RESULT_NO_CARRIER branch of isdn_tty_modem_result(), as already proposed in commit 00409bb0. This avoids a "sleeping function called from invalid context" BUG when the hardware driver calls the statcallb() callback with command==ISDN_STAT_DHUP in atomic context, which in turn calls isdn_tty_modem_result(RESULT_NO_CARRIER, ~), and from there, tty_ldisc_flush() which may sleep. Signed-off-by:
Tilman Schmidt <tilman@imap.cc> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Shaohua Li authored
commit 4981d01e upstream. According to the Intel CPU manual, every time a PGD entry is changed in i386 PAE mode, we need to do a full TLB flush. The current code follows this, and there is a comment about it in the code too. But the current code misses the multi-threaded case. A changed page table might be used by several CPUs, and every such CPU should flush its TLB. Usually this isn't a problem, because we prepopulate all PGD entries at process fork. But when the process does munmap followed by a new mmap, this issue is triggered. When it happens, some CPUs keep doing page faults: http://marc.info/?l=linux-kernel&m=129915020508238&w=2 Reported-by: Yasunori Goto <y-goto@jp.fujitsu.com> Tested-by: Yasunori Goto <y-goto@jp.fujitsu.com> Reviewed-by:
Rik van Riel <riel@redhat.com> Signed-off-by: Shaohua Li <shaohua.li@intel.com> Cc: Mallick Asit K <asit.k.mallick@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-mm <linux-mm@kvack.org> LKML-Reference: <1300246649.2337.95.camel@sli10-conroe> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Milton Miller authored
commit 45a57919 upstream. Paul McKenney's review pointed out two problems with the barriers in the 2.6.38 update to the smp call function many code. First, a barrier that would force the func and info members of data to be visible before their consumption in the interrupt handler was missing. This can be solved by adding a smp_wmb between setting the func and info members and setting the cpumask; this will pair with the existing and required smp_rmb ordering the cpumask read before the read of refs. This placement avoids the need for a second smp_rmb in the interrupt handler, which would be executed on each of the N cpus executing the call request. (I thought this barrier was present, but it was not.) Second, the previous write to refs (establishing the zero that the interrupt handler was testing from all cpus) was performed by a third party cpu. This would invoke transitivity which, as a recent or concurrent addition to memory-barriers.txt now explicitly states, would require a full smp_mb(). However, we know the cpumask will only be set by one cpu (the data owner), and any previous iteration of the mask would have been cleared by the reading cpu. By redundantly writing refs to 0 on the owning cpu before the smp_wmb, the write to refs will follow the same path as the writes that set the cpumask, which in turn allows us to keep the barrier in the interrupt handler a smp_rmb instead of promoting it to a smp_mb (which would be executed by N cpus for each of the possible M elements on the list). I moved and expanded the comment about our (ab)use of the rcu list primitives for the concurrent walk earlier into this function. I considered moving the first two paragraphs to the queue list head and lock, but felt it would have been too disconnected from the code. Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by:
Milton Miller <miltonm@bga.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
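The pairing described above is the classic publish/consume pattern. A self-contained userspace analogy using C11 fences (this is not the kernel code; the fences merely play the roles of smp_wmb()/smp_rmb(), and the single flag stands in for this cpu's bit in data->cpumask):

    #include <stdatomic.h>

    struct call_data_sketch {
            void (*func)(void *);
            void *info;
            atomic_int cpumask_bit; /* stands in for this cpu's bit in the mask */
    };

    /* Owner: make func/info visible before the bit that readers will test. */
    static void publish(struct call_data_sketch *d, void (*func)(void *), void *info)
    {
            d->func = func;
            d->info = info;
            atomic_thread_fence(memory_order_release);      /* role of smp_wmb() */
            atomic_store_explicit(&d->cpumask_bit, 1, memory_order_relaxed);
    }

    /* Interrupt-handler analogue: test the bit, then read func/info. */
    static int consume(struct call_data_sketch *d)
    {
            if (!atomic_load_explicit(&d->cpumask_bit, memory_order_relaxed))
                    return 0;
            atomic_thread_fence(memory_order_acquire);      /* role of smp_rmb() */
            d->func(d->info);
            return 1;
    }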
-
Milton Miller authored
commit e6cd1e07 upstream. Peter pointed out there was nothing preventing the list_del_rcu in smp_call_function_interrupt from running before the list_add_rcu in smp_call_function_many. Fix this by not setting refs until we have gotten the lock for the list. Take advantage of the wmb in list_add_rcu to save an explicit additional one. I tried to force this race with a udelay before the lock & list_add and by mixing all 64 online cpus with just 3 random cpus in the mask, but was unsuccessful. Still, inspection shows a valid race, and the fix is an extension of the existing protection window in the current code. Reported-by:
Peter Zijlstra <peterz@infradead.org> Signed-off-by:
Milton Miller <miltonm@bga.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Eric Sandeen authored
commit d7433142 upstream. (crossport of 1f7bebb9 by Andreas Schlick <schlick@lavabit.com>) When ext3_dx_add_entry() has to split an index node, it has to ensure that name_len of dx_node's fake_dirent is also zero, because otherwise e2fsck won't recognise it as an intermediate htree node and consider the htree to be corrupted. Signed-off-by:
Eric Sandeen <sandeen@redhat.com> Signed-off-by:
Jan Kara <jack@suse.cz> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Anton Blanchard authored
commit 0837e324 upstream. Events on POWER7 can roll back if a speculative event doesn't eventually complete. Unfortunately in some rare cases they will raise a performance monitor exception. We need to catch this to ensure we reset the PMC. In all cases the PMC will be 256 or less cycles from overflow. Signed-off-by:
Anton Blanchard <anton@samba.org> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20110309143842.6c22845e@kryten> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Frederic Weisbecker authored
commit a0f7d0f7 upstream. We toggle the state from start and stop callbacks but actually don't check it when the event triggers. Do it so that these callbacks actually work. Signed-off-by:
Frederic Weisbecker <fweisbec@gmail.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Stephane Eranian <eranian@google.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1299529629-18280-2-git-send-email-fweisbec@gmail.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Trond Myklebust authored
commit e020c680 upstream. This fixes a race in which the task->tk_callback() puts the rpc_task to sleep, setting a new callback. Under certain circumstances, the current code may end up executing the task->tk_action before it gets round to the callback. Signed-off-by:
Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Przemyslaw Bruski authored
commit efed5f26 upstream. Clear input settings before initialization. Signed-off-by:
Przemyslaw Bruski <pbruskispam@op.pl> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Przemyslaw Bruski authored
commit f164753a upstream. SPDIF status retrieval always returned the default settings instead of the actual ones. Signed-off-by:
Przemyslaw Bruski <pbruskispam@op.pl> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Przemyslaw Bruski authored
commit 4c1847e8 upstream. SPDIF status mask creation was incorrect. Signed-off-by:
Przemyslaw Bruski <pbruskispam@op.pl> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Ben Hutchings authored
commit 0f12a4e2 upstream. Commit 280c73d3 ("PCI: centralize the capabilities code in pci-sysfs.c") changed the initialisation of the "rom" and "vpd" attributes, and made the failure path for the "vpd" attribute incorrect. We must free the new attribute structure (attr), but instead we currently free dev->vpd->attr. That will normally be NULL, resulting in a memory leak, but it might be a stale pointer, resulting in a double free. Found by inspection; compile-tested only. Signed-off-by:
Ben Hutchings <bhutchings@solarflare.com> Signed-off-by:
Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
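A generic sketch of this bug class in plain C (invented names, not the PCI sysfs code): on a failed registration the error path must free the attribute it just allocated, not a device-owned pointer that is usually NULL or possibly stale:

    #include <stdlib.h>

    struct attr { const char *name; };
    struct device { struct attr *vpd_attr; };

    /* Hypothetical registration helper that can fail. */
    extern int register_attr(struct device *dev, struct attr *attr);

    static int create_vpd_attr(struct device *dev)
    {
            struct attr *attr = calloc(1, sizeof(*attr));

            if (!attr)
                    return -1;

            if (register_attr(dev, attr) != 0) {
                    /* Correct: release the allocation we just made.  Freeing
                     * dev->vpd_attr here instead would leak attr and could
                     * double-free whatever vpd_attr already points to. */
                    free(attr);
                    return -1;
            }

            dev->vpd_attr = attr;
            return 0;
    }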
-
Jiri Slaby authored
commit 87e3dc38 upstream. Some broken BIOSes on ICH4 chipset report an ACPI region which is in conflict with legacy IDE ports when ACPI is disabled. Even though the regions overlap, IDE ports are working correctly (we cannot find out the decoding rules on chipsets). So the only problem is the reported region itself; if we don't reserve the region in the quirk, everything works as expected. This patch avoids reserving any quirk regions below PCIBIOS_MIN_IO, which is 0x1000. Some regions might be (and are, by a fast google query) below this border, but the only difference is that they won't be reserved anymore. They should still work the same as before. The conflicts look like (1f.0 is bridge, 1f.1 is IDE ctrl): pci 0000:00:1f.1: address space collision: [io 0x0170-0x0177] conflicts with 0000:00:1f.0 [io 0x0100-0x017f] At 0x0100 a 128 bytes long ACPI region is reported in the quirk for ICH4. ata_piix then fails to find disks because the IDE legacy ports are zeroed: ata_piix 0000:00:1f.1: device not available (can't reserve [io 0x0000-0x0007]) References: https://bugzilla.novell.com/show_bug.cgi?id=558740 Signed-off-by:
Jiri Slaby <jslaby@suse.cz> Cc: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Renninger <trenn@suse.de> Signed-off-by:
Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Jiri Slaby authored
commit cdb97558 upstream. Per the ICH4 and ICH6 specs, the ACPI and GPIO regions are valid iff the ACPI_EN and GPIO_EN bits are set to 1. Add checks for these bits into the quirks prior to the region creation. While at it, give the constants macro names. Signed-off-by:
Jiri Slaby <jslaby@suse.cz> Cc: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Renninger <trenn@suse.de> Signed-off-by:
Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Brandeburg, Jesse authored
commit b99af4b0 upstream. Revert commit 7eb93b17 Author: Yu Zhao <yu.zhao@intel.com> Date: Fri Apr 3 15:18:11 2009 +0800 PCI: SR-IOV quirk for Intel 82576 NIC If BIOS doesn't allocate resources for the SR-IOV BARs, zero the Flash BAR and program the SR-IOV BARs to use the old Flash Memory Space. Please refer to Intel 82576 Gigabit Ethernet Controller Datasheet section 7.9.2.14.2 for details. http://download.intel.com/design/network/datashts/82576_Datasheet.pdf Signed-off-by:
Yu Zhao <yu.zhao@intel.com> Signed-off-by:
Jesse Barnes <jbarnes@virtuousgeek.org> This quirk was added before SR-IOV was in production, and now all machines that originally had this issue already have BIOS updates to correct it. The quirk itself is no longer needed and in fact causes bugs if run. Remove it. Signed-off-by:
Jesse Brandeburg <jesse.brandeburg@intel.com> CC: Yu Zhao <yu.zhao@intel.com> CC: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by:
Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Vitaliy Kulikov authored
commit 094a4245 upstream. When the mux for the digital mic is different from the mux for other mics, the current auto-parser doesn't handle them correctly and provides only one mic. This patch fixes the issue. Signed-off-by:
Vitaliy Kulikov <Vitaliy.Kulikov@idt.com> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Sarah Sharp authored
commit 01a1fdb9 upstream. When an endpoint stalls, we need to update the xHCI host's internal dequeue pointer to move it past the stalled transfer. This includes updating the cycle bit (TRB ownership bit) if we have moved the dequeue pointer past a link TRB with the toggle cycle bit set. When we're trying to find the new dequeue segment, find_trb_seg() is supposed to keep track of whether we've passed any link TRBs with the toggle cycle bit set. However, this while loop's body while (cur_seg->trbs > trb || &cur_seg->trbs[TRBS_PER_SEGMENT - 1] < trb) { Will never get executed if the ring only contains one segment. find_trb_seg() will return immediately, without updating the new cycle bit. Since find_trb_seg() has no idea where in the segment the TD that stalled was, make the caller, xhci_find_new_dequeue_state(), check for this special case and update the cycle bit accordingly. This patch should be queued to kernels all the way back to 2.6.31. Signed-off-by:
Sarah Sharp <sarah.a.sharp@linux.intel.com> Tested-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
wangyanqing authored
commit d0781383 upstream. In March 2011 I picked up a new DAK-780EX (a professional digital reverb/mix system), which uses the CH341T chipset to communicate with the computer; the CH341T's vendor code is 1a86. Looking up the CH341T's vendor and product IDs I see: 1a86 QinHeng Electronics, 5523 CH341 in serial mode, USB to serial port converter. CH341T and CH341 are products of the same company and may share some common hardware, and I have tested that ch341.c works well with the CH341T. Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
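The change amounts to one more entry in the driver's USB ID table. A hedged sketch of the pattern (only the 1a86/5523 pair comes from the text above; the 4348/5523 line is an assumed pre-existing entry shown for context, and the table name is invented):

    #include <linux/usb.h>
    #include <linux/module.h>

    static const struct usb_device_id example_id_table[] = {
            { USB_DEVICE(0x4348, 0x5523) }, /* assumed pre-existing CH341 entry */
            { USB_DEVICE(0x1a86, 0x5523) }, /* QinHeng CH341T in serial mode */
            { },
    };
    MODULE_DEVICE_TABLE(usb, example_id_table);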
-
Jiri Slaby authored
commit 6960f40a upstream. Make sure that we check the return value of tty_port_tty_get. Sometimes it may return NULL, which we later dereference. The only place here is in kobil_read_int_callback, so fix it. Signed-off-by:
Jiri Slaby <jslaby@suse.cz> Cc: Alan Cox <alan@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
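A minimal sketch of the pattern being enforced (the surrounding function and buffer handling are illustrative, not the kobil_sct code; the 2.6-era tty flip API is assumed): tty_port_tty_get() may return NULL, so check before dereferencing and drop the reference when done:

    #include <linux/tty.h>
    #include <linux/tty_flip.h>

    static void example_push_chars(struct tty_port *port,
                                   const unsigned char *buf, int len)
    {
            struct tty_struct *tty = tty_port_tty_get(port);

            if (!tty)               /* no tty currently attached to the port */
                    return;

            tty_insert_flip_string(tty, buf, len);
            tty_flip_buffer_push(tty);
            tty_kref_put(tty);      /* balance tty_port_tty_get() */
    }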
-
Senthil Balasubramanian authored
commit ac45c12d upstream. There are a few places where we check macversion and revisions before the RTC is powered on. However, we read macversion and revisions only after the RTC is powered on, so both are actually zero at that point, which leads to incorrect srev checks. Incorrect srev checks can cause registers to be configured wrongly and can cause unexpected behavior. Fixing this seems to address the ASPM issue that we have observed: without this fix, the laptop becomes very slow and mostly hangs with ASPM L1 enabled. Fix this by reading macversion and revisions before we start using them. There is no reason why we should delay reading this information until the RTC is powered on, as it is just register information. Signed-off-by:
Senthil Balasubramanian <senthilkumar@atheros.com> Signed-off-by:
John W. Linville <linville@tuxdriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Andreas Herrmann authored
commit 1d3e09a3 upstream. Commit 7f74f8f2 (x86 quirk: Fix polarity for IRQ0 pin2 override on SB800 systems) introduced a regression. It removed some SB600 specific code to determine the revision ID without adapting a corresponding revision ID check for SB600. See this mail thread: http://marc.info/?l=linux-kernel&m=129980296006380&w=2 This patch adapts the corresponding check to cover all SB600 revisions. Tested-by:
Wang Lei <f3d27b@gmail.com> Signed-off-by:
Andreas Herrmann <andreas.herrmann3@amd.com> Cc: Andrew Morton <akpm@linux-foundation.org> LKML-Reference: <20110315143137.GD29499@alberich.amd.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Sean Hefty authored
commit 29963437 upstream. When processing a SIDR REQ, the ib_cm allocates a new cm_id. The refcount of the cm_id is initialized to 1. However, cm_process_work will decrement the refcount after invoking all callbacks. The result is that the cm_id will end up with refcount set to 0 by the end of the sidr req handler. If a user tries to destroy the cm_id, the destruction will proceed, under the incorrect assumption that no other threads are referencing the cm_id. This can lead to a crash when the cm callback thread tries to access the cm_id. This problem was noticed as part of a larger investigation with kernel crashes in the rdma_cm when running on a real time OS. Signed-off-by:
Sean Hefty <sean.hefty@intel.com> Acked-by:
Doug Ledford <dledford@redhat.com> Signed-off-by:
Roland Dreier <roland@purestorage.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Sean Hefty authored
commit 25ae21a1 upstream. Doug Ledford and Red Hat reported a crash when running the rdma_cm on a real-time OS. The crash has the following call trace: cm_process_work cma_req_handler cma_disable_callback rdma_create_id kzalloc init_completion cma_get_net_info cma_save_net_info cma_any_addr cma_zero_addr rdma_translate_ip rdma_copy_addr cma_acquire_dev rdma_addr_get_sgid ib_find_cached_gid cma_attach_to_dev ucma_event_handler kzalloc ib_copy_ah_attr_to_user cma_comp [ preempted ] cma_write copy_from_user ucma_destroy_id copy_from_user _ucma_find_context ucma_put_ctx ucma_free_ctx rdma_destroy_id cma_exch cma_cancel_operation rdma_node_get_transport rt_mutex_slowunlock bad_area_nosemaphore oops_enter They were able to reproduce the crash multiple times with the following details: Crash seems to always happen on the: mutex_unlock(&conn_id->handler_mutex); as conn_id looks to have been freed during this code path. An examination of the code shows that a race exists in the request handlers. When a new connection request is received, the rdma_cm allocates a new connection identifier. This identifier has a single reference count on it. If a user calls rdma_destroy_id() from another thread after receiving a callback, rdma_destroy_id will proceed to destroy the id and free the associated memory. However, the request handlers may still be in the process of running. When control returns to the request handlers, they can attempt to access the newly created identifiers. Fix this by holding a reference on the newly created rdma_cm_id until the request handler is through accessing it. Signed-off-by:
Sean Hefty <sean.hefty@intel.com> Acked-by:
Doug Ledford <dledford@redhat.com> Signed-off-by:
Roland Dreier <roland@purestorage.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
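A self-contained sketch of the shape of the fix (C11 atomics, invented names; not the rdma_cm code): the request handler pins the freshly created id with its own reference, so a concurrent destroy from another thread only drops that thread's reference and the memory stays valid until the handler finishes:

    #include <stdatomic.h>
    #include <stdlib.h>

    struct id_sketch {
            atomic_int refs;
            /* ... connection state ... */
    };

    static void id_get(struct id_sketch *id)
    {
            atomic_fetch_add(&id->refs, 1);
    }

    static void id_put(struct id_sketch *id)
    {
            if (atomic_fetch_sub(&id->refs, 1) == 1)
                    free(id);       /* last reference dropped */
    }

    /* Request handler: hold a reference while callbacks may still run. */
    static void handle_request(struct id_sketch *new_id)
    {
            id_get(new_id);         /* handler's own reference */
            /* ... run callbacks; another thread may call id_put() to destroy ... */
            id_put(new_id);         /* only now may the id actually go away */
    }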
-
Seth Heasley authored
commit 64a3903d upstream. This patch adds an updated SATA RAID DeviceID for the Intel Patsburg PCH. Signed-off-by:
Seth Heasley <seth.heasley@intel.com> Signed-off-by:
Jeff Garzik <jgarzik@pobox.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Seth Heasley authored
commit a4a461a6 upstream. This patch adds the AHCI-mode SATA DeviceID for the Intel DH89xxCC PCH. Signed-off-by:
Seth Heasley <seth.heasley@intel.com> Signed-off-by:
Jeff Garzik <jgarzik@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Seth Heasley authored
commit 992b3fb9 upstream. This patch adds the Intel Patsburg (PCH) SATA AHCI and RAID Controller DeviceIDs. Signed-off-by:
Seth Heasley <seth.heasley@intel.com> Signed-off-by:
Jeff Garzik <jgarzik@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
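All three of these ID additions follow the same one-line pattern in a PCI driver's ID table. A hedged sketch of that pattern (the 0x1d00/0x1d01 device IDs and the table name are placeholders, not values from the patches):

    #include <linux/pci.h>
    #include <linux/module.h>

    static const struct pci_device_id example_sata_ids[] = {
            { PCI_VDEVICE(INTEL, 0x1d00), 0 },      /* placeholder AHCI-mode ID */
            { PCI_VDEVICE(INTEL, 0x1d01), 0 },      /* placeholder RAID-mode ID */
            { },
    };
    MODULE_DEVICE_TABLE(pci, example_sata_ids);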
-
Kamal Mostafa authored
commit 9a6d44b9 upstream. Emit a warning when "mem=nopentium" is specified on any arch other than x86_32 (the only arch that supports it). Signed-off-by:
Kamal Mostafa <kamal@canonical.com> BugLink: http://bugs.launchpad.net/bugs/553464 Cc: Yinghai Lu <yinghai@kernel.org> Cc: Len Brown <len.brown@intel.com> Cc: Rafael J. Wysocki <rjw@sisk.pl> LKML-Reference: <1296783486-23033-2-git-send-email-kamal@canonical.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Kamal Mostafa authored
commit 77eed821 upstream. Avoid removing all of memory and panicking when "mem={invalid}" is specified, e.g. mem=blahblah, mem=0, or mem=nopentium (on platforms other than x86_32). Signed-off-by:
Kamal Mostafa <kamal@canonical.com> BugLink: http://bugs.launchpad.net/bugs/553464 Cc: Yinghai Lu <yinghai@kernel.org> Cc: Len Brown <len.brown@intel.com> Cc: Rafael J. Wysocki <rjw@sisk.pl> LKML-Reference: <1296783486-23033-1-git-send-email-kamal@canonical.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Steven Rostedt authored
commit 868baf07 upstream. When the function graph tracer starts, it needs to make a special stack for each task to save the real return values of the tasks. All running tasks have this stack created, as well as any new tasks. On CPU hot plug, the new idle task will allocate a stack as well when init_idle() is called. The problem is that cpu hotplug does not create a new idle_task. Instead it uses the idle task that existed when the cpu went down. ftrace_graph_init_task() will add a new ret_stack to the task that is given to it. Because a clone will make the task have a stack of its parent, it does not check whether the task's ret_stack is already NULL or not. When the CPU hotplug code starts a CPU up again, it will allocate a new stack even though one already existed for it. The solution is to treat the idle_task specially. In fact, the function_graph code already does, just not at init_idle(). Instead of using ftrace_graph_init_task() for the idle task, which expects the task to be a clone, have a separate ftrace_graph_init_idle_task(). Also, we will create a per_cpu ret_stack that is used by the idle task. When we call ftrace_graph_init_idle_task() it will check if the idle task's ret_stack is NULL, and if it is, it will assign it the per_cpu ret_stack. Reported-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Suggested-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Andrey Vagin authored
commit f8626854 upstream. mm_fault_error() should not invoke the oom-killer if the page fault occurred in kernel space, e.g. in copy_from_user()/copy_to_user(). This would happen if we find ourselves in OOM on a copy_to_user(), or a copy_from_user() which faults. Without this patch, the kernel hangs in copy_from_user(), because the OOM killer sends SIG_KILL to the current process, but it can't handle a signal while in a syscall; the kernel then returns to copy_from_user(), re-executes the current instruction and provokes the page fault again. With this patch the kernel returns -EFAULT from copy_from_user(). The code that checks whether the page fault occurred in kernel space has been copied from do_sigbus(). This situation is handled the same way on powerpc, xtensa, tile, ... Signed-off-by:
Andrey Vagin <avagin@openvz.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> LKML-Reference: <201103092322.p29NMNPH001682@imap1.linux-foundation.org> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Florian Fainelli authored
commit bf3a1eb8 upstream. When au1000_eth probes the MII bus for PHY address, if we do not set au1000_eth platform data's phy_search_highest_address, the MII probing logic will exit early and will assume a valid PHY is found at address 0. For MTX-1, the PHY is at address 31, and without this patch, the link detection/speed/duplex would not work correctly. Signed-off-by:
Florian Fainelli <florian@openwrt.org> To: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/2111/ Signed-off-by:
Ralf Baechle <ralf@linux-mips.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Tejun Heo authored
commit f08dc1ac upstream. ata_qc_complete() contains special handling for certain commands. For example, it schedules EH for device revalidation after certain configurations are changed. These shouldn't be applied to EH commands but they were. In most cases, it doesn't cause an actual problem because EH doesn't issue any command which would trigger special handling; however, ACPI can issue such commands via _GTF which can cause weird interactions. Restructure ata_qc_complete() such that EH commands are always passed on to __ata_qc_complete(). stable: Please apply to -stable only after 2.6.38 is released. Signed-off-by:
Tejun Heo <tj@kernel.org> Reported-by:
Kyle McMartin <kyle@mcmartin.ca> Signed-off-by:
Jeff Garzik <jgarzik@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Axel Lin authored
commit c804c733 upstream. Since 43cc71ee (platform: prefix MODALIAS with "platform:"), the platform modalias is prefixed with "platform:". Signed-off-by:
Axel Lin <axel.lin@gmail.com> Signed-off-by:
Artem Bityutskiy <Artem.Bityutskiy@nokia.com> Signed-off-by:
David Woodhouse <David.Woodhouse@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
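A hedged sketch of the one-line fix pattern (the driver name "example-flash" is invented): the alias needs the "platform:" prefix so it matches the MODALIAS uevent emitted by the platform bus:

    #include <linux/module.h>

    /* MODULE_ALIAS("example-flash") would never match "platform:example-flash". */
    MODULE_ALIAS("platform:example-flash");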
-
Hans de Goede authored
commit d9ebaa45 upstream. This avoids a possible race leading to trying to dereference NULL. Signed-off-by:
Hans de Goede <hdegoede@redhat.com> Acked-by:
Jean Delvare <khali@linux-fr.org> Signed-off-by:
Guenter Roeck <guenter.roeck@ericsson.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-