- 28 Dec, 2007 2 commits
-
-
Olof Johansson authored
By default the OpenPIC on PWRficient will bias to one core (since that will improve the chances of the other core being able to stay idle/powered down). However, this conflicts with most irq load balancing schemes, since setting an interrupt to be delivered to either core doesn't really result in the load being shared. It also doesn't work well with the soft irq disable feature of PPC, since EE will stay on until the first interrupt is taken while soft disabled. Set the gconf0 config bit that enables even distribution of interrupts among the two cores. Signed-off-by: Olof Johansson <olof@lixom.net>
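As a rough illustration of the kind of change described here, the sketch below flips a "spread interrupts across cores" bit in an MPIC-style global configuration register. The register offset, bit value and function name are hypothetical placeholders, not the actual PWRficient gconf0 layout.

```c
#include <linux/types.h>
#include <asm/io.h>

/* Hypothetical offset and bit; the real PWRficient gconf0 layout differs. */
#define EXAMPLE_MPIC_GCONF0          0x1020
#define EXAMPLE_GCONF0_SPREAD_IRQS   0x00000001

static void example_enable_irq_spread(void __iomem *mpic_regs)
{
	u32 gconf0 = in_be32(mpic_regs + EXAMPLE_MPIC_GCONF0);

	/* Distribute external interrupts evenly between the two cores. */
	gconf0 |= EXAMPLE_GCONF0_SPREAD_IRQS;
	out_be32(mpic_regs + EXAMPLE_MPIC_GCONF0, gconf0);
}
```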
-
Olof Johansson authored
Some PWRficient-based boards have an NMI button that's wired up to a GPIO as an interrupt source. By configuring the openpic accordingly, these get delivered as a machine check with high priority, instead of as an external interrupt. The device tree contains a property "nmi-source" in the openpic node for these systems, and it's the (hwirq) source for the input. Also, for these interrupts, the IACK is read from a different register than the regular one (MCACK instead), but they are EOI'd as usual. So implement said function for the mpic driver. Finally, move a couple of external function defines to include/ instead of local under sysdev. Being able to mask/unmask and eoi directly saves us from setting up a dummy irq handler that will never be called. Signed-off-by: Olof Johansson <olof@lixom.net>
-
- 21 Dec, 2007 34 commits
-
-
Paul Mackerras authored
-
Stephen Rothwell authored
Maple and pasemi both require PCI, as does CONFIG_OF_PLATFORM_PCI. The default for CONFIG_ISA_DMA_API is set to match the protection around the relevant routines in asm/dma.h. I also had to remove the PMAC platform from the combined build. The precis is that to build a 64-bit kernel with no PCI, you can only include pSeries and iSeries. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Stephen Rothwell authored
Fixes this warning: arch/powerpc/platforms/powermac/pci.c: In function 'u3_ht_cfg_access': arch/powerpc/platforms/powermac/pci.c:354: warning: return discards qualifiers from pointer target type arch/powerpc/platforms/powermac/pci.c:358: warning: return discards qualifiers from pointer target type Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
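For readers unfamiliar with this gcc warning, here is a minimal, generic sketch of the warning class and its fix; it is not the actual powermac code, and cfg_base/example_cfg_access are made-up names.

```c
#include <linux/compiler.h>
#include <linux/types.h>

/*
 * "return discards qualifiers from pointer target type" fires when a
 * function returns a pointer to a qualified (here: volatile) object but
 * its return type lacks the qualifier.  Matching the qualifiers on the
 * return type keeps gcc quiet.
 */
static volatile u8 __iomem *cfg_base;	/* hypothetical config space base */

static volatile u8 __iomem *example_cfg_access(unsigned int off)
{
	return cfg_base + off;
}
```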
-
Stephen Rothwell authored
This will allow us to declare const all the statically declared arrays of these. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Stephen Rothwell authored
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Stephen Rothwell authored
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
The 32-bit PCI code tests if "bus" is non-NULL after calling pci_scan_bus_parented() in one place but not another before dereferencing it. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
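A hedged sketch of the pattern being enforced, using the real pci_scan_bus_parented()/pci_bus_add_devices() calls but an otherwise made-up call site:

```c
#include <linux/kernel.h>
#include <linux/pci.h>

/* Everything except the two PCI calls is an illustrative stand-in. */
static void example_scan_phb(struct device *parent, int first_busno,
			     struct pci_ops *ops, void *sysdata)
{
	struct pci_bus *bus;

	bus = pci_scan_bus_parented(parent, first_busno, ops, sysdata);
	if (bus == NULL) {
		printk(KERN_ERR "Failed to create PCI bus %d\n", first_busno);
		return;
	}

	/* Only touch the bus once we know the scan succeeded. */
	pci_bus_add_devices(bus);
}
```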
-
Benjamin Herrenschmidt authored
This fixes a few issues with via-pmu based backlight control. First, it fixes a sign problem with the setup of the backlight curve since the `range' value there -can- (and will) go negative. Then, it reworks the interaction between this and the via-pmu sleep code to properly restore backlight on wakeup from sleep. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Scott Wood authored
These hooks ensure that a decrementer interrupt is not pending when suspending; otherwise, problems may occur on 6xx/7xx/7xxx-based systems (except for powermacs, which use a separate suspend path). For example, with deep sleep on the 831x, a pending decrementer will cause a system freeze because the SoC thinks the decrementer interrupt would have woken the system, but the core must have interrupts disabled due to the setup required for deep sleep. Changed via-pmu.c to use the new ppc_md hooks, and made the arch_* functions call the generic_* functions unconditionally. -- paulus Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Jeremy Kerr authored
Based on an original patch from Arnd Bergmann <arnd.bergmann@de.ibm.com> If there's no entry in the mailbox, then a read on the _info file will return data from an uninitialised variable. This change returns EOF if there's no mailbox info available instead. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
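A hedged sketch of the EOF behaviour described here; everything except simple_read_from_buffer() is an illustrative stand-in for the spufs mailbox-info read path:

```c
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/types.h>

static ssize_t example_info_read(char __user *buf, size_t len, loff_t *pos,
				 int have_data, u32 saved_data)
{
	char tmp[16];
	int n;

	/* Nothing was ever captured in the mailbox: report EOF instead of
	 * formatting an uninitialised value. */
	if (!have_data)
		return 0;

	n = snprintf(tmp, sizeof(tmp), "0x%08x\n", saved_data);
	return simple_read_from_buffer(buf, len, pos, tmp, n);
}
```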
-
Andre Detsch authored
This fixes the behavior of spufs when a spu tries a DMA operation based on a wrong / unavailable address. Instead of just generating a SIGBUS signal, spufs now generates a SIGSEGV signal and restarts the problematic DMA operation after the execution of the application's signal handler. This allows applications to employ user-level paging systems. Although the restart_dma function is called before the application's signal handler, the operation is not actually performed at this time, since the spu context is already stopped. The operation only takes place when spu_run is restarted (which happens automatically). Signed-off-by: Andre Detsch <adetsch@br.ibm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Aegis Lin authored
The original spusched_timer was designed to take effect only when a context is waiting in the runqueue. This change adds an additional, lower-frequency timer to handle the spu_load updates. The new timer will be triggered every LOAD_FREQ ticks. Signed-off-by: Aegis Lin <aegislin@gmail.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Christoph Hellwig authored
Make most places that use spu_acquire/spu_acquire_saved interruptible; this allows getting out of the spufs code when e.g. pressing ctrl+c. There are a few places where we get called, e.g. from spufs teardown routines, where we can't simply err out, so these are left with a comment. For now I've also not touched the poll routines because it's open what libspe would expect in terms of interrupted system calls. Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
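A minimal sketch of the interruptible-acquire pattern, assuming a context guarded by a state_mutex; the struct and function names are illustrative, not the exact spufs API:

```c
#include <linux/mutex.h>

struct example_ctx {
	struct mutex state_mutex;
};

/* Returns 0 on success, or a negative error if a signal (e.g. ctrl+c)
 * arrived while waiting, so the caller can back out of spufs cleanly. */
static int example_acquire(struct example_ctx *ctx)
{
	return mutex_lock_interruptible(&ctx->state_mutex);
}

static void example_release(struct example_ctx *ctx)
{
	mutex_unlock(&ctx->state_mutex);
}
```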
-
Christoph Hellwig authored
The simple attr macros currently used by spufs can't deal with the handlers returning errors, which is required to make the state_mutex interruptible. This adds a local copy that allows for an error return from the get/set handlers. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
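A hedged sketch of what an error-capable "simple attr" get handler can look like; the context struct and field are hypothetical, and only the mutex_lock_interruptible() call is the real kernel API:

```c
#include <linux/mutex.h>
#include <linux/types.h>

struct example_ctx {
	struct mutex state_mutex;
	u64 some_counter;	/* hypothetical piece of context state */
};

/* An int return lets an interrupted lock acquisition propagate to
 * userspace instead of being swallowed by the attr machinery. */
static int example_attr_get(void *data, u64 *val)
{
	struct example_ctx *ctx = data;
	int ret = mutex_lock_interruptible(&ctx->state_mutex);

	if (ret)
		return ret;

	*val = ctx->some_counter;
	mutex_unlock(&ctx->state_mutex);
	return 0;
}
```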
-
Luke Browning authored
Change spufs_spu_run so that the context is queued directly to the scheduler and the controlling thread advances directly to spufs_wait() for spe errors and exceptions. nosched contexts are treated the same as before. Fixes from Christoph Hellwig <hch@lst.de> Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Masato Noguchi authored
This changes the spu context switch code to not write to reserved bits of spu interrupt status register. The architecture book says the reserved fields should be set to zero. Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Luke Browning authored
Need to re-check priority after dropping lock. Otherwise, a more favored context may be preempted. Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
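A generic sketch of the "drop the lock, then re-check before preempting" pattern this commit describes; all names and fields are illustrative:

```c
#include <linux/spinlock.h>

/* Illustrative run queue and context; a lower prio value is more favoured. */
struct example_rq  { spinlock_t lock; int current_prio; };
struct example_ctx { int prio; };

static void example_do_preempt(struct example_rq *rq, struct example_ctx *new)
{
	rq->current_prio = new->prio;	/* stand-in for the real preemption */
}

static void example_try_preempt(struct example_rq *rq, struct example_ctx *new)
{
	spin_lock(&rq->lock);
	if (new->prio >= rq->current_prio) {
		/* Not more favoured than what is running: nothing to do. */
		spin_unlock(&rq->lock);
		return;
	}
	spin_unlock(&rq->lock);

	/* ... work that must be done without holding the lock ... */

	spin_lock(&rq->lock);
	/* The lock was dropped, so the earlier observation may be stale:
	 * re-check before preempting, or a more favoured context could be
	 * evicted by mistake. */
	if (new->prio < rq->current_prio)
		example_do_preempt(rq, new);
	spin_unlock(&rq->lock);
}
```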
-
Luke Browning authored
This cleans up spu_run_init so that it does all of the spu initialization for spufs_run_spu. It initializes the spu context as much as possible before it activates the spu and writes the runcntl register. Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Jeremy Kerr authored
Based on original patches from Arnd Bergmann <arnd.bergman@de.ibm.com>; and Luke Browning <lukebr@linux.vnet.ibm.com> Currently, spu contexts need to be loaded to the SPU in order to take class 0 and class 1 exceptions. This change makes the actual interrupt handlers much simpler (i.e., they just set the exception information in the context save area), and defers the handling code to the spufs_handle_class[01] functions, called from spufs_run_spu. This should improve the concurrency of the spu scheduling, leading to greater SPU utilization when SPUs are overcommitted. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Jeremy Kerr authored
Add a few #defines for the class 0, 1 and 2 interrupt status bits, and use them instead of magic numbers when we're setting or checking for these interrupts. Also, add a #define for the class 2 mailbox threshold interrupt mask. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
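An illustration of the clean-up style, with placeholder bit values rather than the real CBEA class 0/1/2 status assignments:

```c
#include <linux/types.h>

/* Placeholder values -- not the real class 0/1/2 bit assignments. */
#define EX_CLASS0_DMA_ALIGNMENT_INTR	0x1ull
#define EX_CLASS0_INVALID_DMA_INTR	0x2ull
#define EX_CLASS2_MAILBOX_INTR		0x1ull
#define EX_CLASS2_MAILBOX_THRESHOLD	0x4ull

/* Before: "if (stat & 0x3)".  After: the intent is visible at the call site. */
static int example_class0_pending(u64 stat)
{
	return (stat & (EX_CLASS0_DMA_ALIGNMENT_INTR |
			EX_CLASS0_INVALID_DMA_INTR)) != 0;
}
```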
-
Jeremy Kerr authored
When doing a poll on the mbox stat file of a swapped-out context, we clear the class 0 interrupt status, rather than the class 2 interrupt status. This change corrects the poll operation to clear the correct interrupt. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Luke Browning authored
This change encapsulates the spu_privcntl_RW register so that it can be written through backing ops. This is necessary so that spu contexts can be initialized and queued to the scheduler in spufs_run_spu. Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Arnd Bergmann authored
This change disables the logic that faults-in spu contexts under the covers from the page fault handler. When a fault requires a runnable context, the handler will block until the context is scheduled by other means. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Jeremy Kerr authored
Currently, part of the spufs code (switch.o, lscsa_alloc.o and fault.o) is compiled directly into the kernel. This change moves these components into the spufs module instead. The lscsa and switch objects are fairly straightforward to move in. For the fault.o module, we split the fault-handling code into two parts: a/p/p/c/spu_fault.c and a/p/p/c/spufs/fault.c. The former is for the in-kernel spu_handle_mm_fault function, and we move the rest of the fault-handling code into spufs. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Julio M. Merino Vidal authored
Fix a few typos in the spufs scheduler comments. Signed-off-by: Julio M. Merino Vidal <jmerino@ac.upc.edu> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Masato Noguchi authored
Add platform specific SPU run control routines to the spufs. The current spufs implementation uses the SPU master run control bit (MFC_SR1[S]) to control SPE execution, but the PS3 hypervisor does not support the use of this feature. This change adds the run control wrapper routines spu_enable_spu() and spu_disable_spu(). The bare metal routines use the master run control bit, and the PS3 specific routines use the priv2 run control register. An outstanding enhancement for the PS3 would be to add a guard to check for incorrect access to the spu problem state when the spu context is disabled. This check could be implemented with a flag added to the spu context that would inhibit mapping problem state pages, and a routine to unmap spu problem state pages. When the spu is enabled with ps3_enable_spu() the flag would be set allowing pages to be mapped, and when the spu is disabled with ps3_disable_spu() the flag would be cleared and mapped problem state pages would be unmapped. Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com> Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
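A hedged sketch of how such platform-specific wrappers are commonly structured: an ops table carries per-platform enable/disable hooks and the wrapper dispatches through it. The struct and names below are illustrative, not the exact spu management API:

```c
/* Illustrative management-ops indirection; not the exact spu_priv1 API. */
struct example_spu;

struct example_spu_mgmt_ops {
	void (*enable_spu)(struct example_spu *spu);	/* bare metal: MFC_SR1[S] */
	void (*disable_spu)(struct example_spu *spu);	/* PS3: priv2 run control */
};

static const struct example_spu_mgmt_ops *example_mgmt_ops;

static inline void example_enable_spu(struct example_spu *spu)
{
	example_mgmt_ops->enable_spu(spu);
}

static inline void example_disable_spu(struct example_spu *spu)
{
	example_mgmt_ops->disable_spu(spu);
}
```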
-
Emil Medve authored
When a module has relocation sections with tens of thousands of entries, counting the distinct/unique entries only (i.e. no duplicates) at load time can take tens of seconds and up to minutes. The sore point is the count_relocs() function which is called as part of the architecture specific module loading processing path: -> load_module() generic -> module_frob_arch_sections() arch specific -> get_plt_size() 32-bit -> get_stubs_size() 64-bit -> count_relocs() Here count_relocs() is called to find out how many distinct targets of R_PPC_REL24 relocations there are, since each distinct target needs a PLT entry or a stub created for it. The previous counting algorithm has O(n^2) complexity. Basically two solutions were proposed on the mailing list: a hash-based approach and a sort-based approach. The hash-based approach is the fastest, O(n), but it needs additional memory, and in certain corner cases it can use a lot of memory due to degeneration of the hash. One such proposal was submitted here: http://ozlabs.org/pipermail/linuxppc-dev/2007-June/037641.html The sort-based approach is slower, O(n log n + n), but if the sorting is done "in place" it needs no additional memory. This commit implements the in-place sort option, as sketched below. Signed-off-by: Emil Medve <Emilian.Medve@Freescale.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
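A hedged sketch of the in-place sort approach (sort the relocation table, then count adjacent entries that differ), using the kernel's lib/sort helper; the comparison on r_info/r_addend is only illustrative of how distinct R_PPC_REL24 targets might be identified:

```c
#include <linux/elf.h>
#include <linux/sort.h>

static int example_cmp_rela(const void *a, const void *b)
{
	const Elf32_Rela *x = a, *y = b;

	if (x->r_info != y->r_info)
		return x->r_info < y->r_info ? -1 : 1;
	if (x->r_addend != y->r_addend)
		return x->r_addend < y->r_addend ? -1 : 1;
	return 0;
}

static unsigned int example_count_relocs(Elf32_Rela *rela, unsigned int num)
{
	unsigned int i, distinct;

	if (!num)
		return 0;

	/* O(n log n) in-place sort, then one O(n) pass: no extra memory. */
	sort(rela, num, sizeof(*rela), example_cmp_rela, NULL);

	distinct = 1;
	for (i = 1; i < num; i++)
		if (example_cmp_rela(&rela[i], &rela[i - 1]) != 0)
			distinct++;

	return distinct;
}
```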
-
Linus Torvalds authored
-
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86: x86: intel_cacheinfo.c: cpu cache info entry for Intel Tolapai x86: fix die() to not be preemptible
-
Linus Torvalds authored
* 'for-linus' of git://oss.sgi.com:8090/xfs/xfs-2.6: [XFS] Initialise current offset in xfs_file_readdir correctly [XFS] Fix mknod regression
-
Lachlan McIlroy authored
After reading the directory contents into the temporary buffer, we grab each dirent and pass it to filldir with the current offset of the dirent. The current offset was not being set for the first dirent in the temporary buffer, which could result in bad offsets being set in the f_pos field, resulting in looping and duplicate entries being returned from readdir. SGI-PV: 974905 SGI-Modid: xfs-linux-melb:xfs-kern:30282a Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: Tim Shimmin <tes@sgi.com> Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
Christoph Hellwig authored
This was broken by my '[XFS] simplify xfs_create/mknod/symlink prototype', which assigned the re-shuffled ondisk dev_t back to the rdev variable in xfs_vn_mknod. Because of that i_rdev is set to the ondisk dev_t instead of the linux dev_t later down the function. Fortunately the fix for it is trivial: we can just remove the assignment because xfs_revalidate_inode has done the proper job before unlocking the inode. SGI-PV: 974873 SGI-Modid: xfs-linux-melb:xfs-kern:30273a Signed-off-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
Jason Gaston authored
This patch adds a cpu cache info entry for the Intel Tolapai cpu. Signed-off-by: Jason Gaston <jason.d.gaston@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Ingo Molnar authored
Andrew "Eagle Eye" Morton noticed that we use raw_local_save_flags() instead of raw_local_irq_save(flags) in die(). This allows the preemption of oopsing contexts - which is highly undesirable. It also causes CONFIG_DEBUG_PREEMPT to complain, as reported by Miles Lane. this bug was introduced via: commit 39743c9e Author: Andi Kleen <ak@suse.de> Date: Fri Oct 19 20:35:03 2007 +0200 x86: use raw locks during oopses - spin_lock_irqsave(&die.lock, flags); + __raw_spin_lock(&die.lock); + raw_local_save_flags(flags); that is not a correct open-coding of spin_lock_irqsave(): both the ordering is wrong (irqs should be disabled _first_), and the wrong flags-saving API was used. Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 20 Dec, 2007 4 commits
-
-
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched: debug: add end-of-oops marker sched: rt: account the cpu time during the tick
-
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm: dm crypt: use bio_add_page dm: merge max_hw_sector dm: trigger change uevent on rename dm crypt: fix write endio dm mpath: hp requires scsi dm: table detect io beyond device
-
Milan Broz authored
Fix possible max_phys_segments violation in cloned dm-crypt bio. In a write operation dm-crypt needs to allocate a new bio request and run the crypto operation on this clone. The cloned request always has the same size, but the number of physical segments can increase and violate the max_phys_segments restriction. This can lead to data corruption and serious hardware malfunction. This was observed when using XFS over dm-crypt and at least two HBA controller drivers (arcmsr, cciss) recently. Fix it by using the bio_add_page() call (which tests for other restrictions too) instead of constructing our own biovec. All versions of dm-crypt are affected by this bug. Cc: stable@kernel.org Cc: dm-crypt@saout.de Signed-off-by: Milan Broz <mbroz@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com>
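A hedged sketch of the approach: build the clone with bio_add_page(), which enforces the queue's restrictions, instead of filling the biovec by hand. Allocation and error handling are simplified, and names other than the block-layer calls are illustrative:

```c
#include <linux/bio.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static int example_fill_clone(struct bio *clone, unsigned int nr_pages)
{
	unsigned int i;

	for (i = 0; i < nr_pages; i++) {
		struct page *page = alloc_page(GFP_NOIO);

		if (!page)
			return -ENOMEM;

		/* bio_add_page() returns the number of bytes added; it adds
		 * nothing if doing so would violate a queue restriction such
		 * as max_phys_segments, so the clone can never exceed what
		 * the underlying device accepts. */
		if (bio_add_page(clone, page, PAGE_SIZE, 0) != PAGE_SIZE) {
			__free_page(page);
			break;
		}
	}

	return 0;
}
```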
-
Neil Brown authored
Make sure dm honours max_hw_sectors of underlying devices. We still have no firm testing evidence in support of this patch but believe it may help to resolve some bug reports. - agk Signed-off-by: Neil Brown <neilb@suse.de> Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-