- 24 Apr, 2007 11 commits
-
-
Olof Johansson authored
Oprofile support for PA6T, kernel side. Also rename the PA6T_SPRN.* defines to SPRN_PA6T.*. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Olof Johansson authored
Reset MPIC on boot to clear some timer state that firmware might leave configured. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Olof Johansson authored
Minor HID change. Firmware can't know that we want this set, so we have to set it in the kernel. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Olof Johansson authored
Device 0 function 0 on the root bus is really a two-function bus agent, but only the first function is visible. Because of this, we need to allow config accesses into the second range. Modify the check for valid offsets accordingly. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Josh Boyer authored
The PowerPC 750CL has high BATs. This patch adds a CPU_FTRS_750CL definition that includes them. Without it, the original firmware mappings in the high BATs aren't cleared and continue to override the Linux translations. It also adds CPU_FTR_COMMON to CPU_FTRS_750GX for completeness. Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
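A hedged sketch of what the new feature mask might look like; CPU_FTRS_750 and CPU_FTR_HAS_HIGH_BATS are existing cputable.h symbols, but the exact composition shown here is an assumption rather than the patch itself:

    /* Sketch only, belongs in asm/cputable.h: a 750CL feature set that
     * inherits the plain 750 features and additionally advertises the
     * high BATs, so early boot code clears BAT4-7 left behind by firmware. */
    #define CPU_FTRS_750CL	(CPU_FTRS_750 | CPU_FTR_HAS_HIGH_BATS)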
-
will schmidt authored
Fix a handful of comment typos for hvc_console. Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Will Schmidt authored
Add a back-off mechanism to hvc_console's polling logic. While the console is idle, this drops the number of timer firings per second from roughly 90 to about one every two seconds, which is most noticeable when watching /proc/timer_stats output. It only matters when hvc_console is running in poll mode, i.e. on Power4 and Cell systems. I've tested on Power4; Michael Ellerman has both contributed to the patch and tested it on Cell. Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com> Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Acked-by: Linas Vepstas <linas@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
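A minimal sketch of the back-off idea, with hypothetical names and constants (not the actual hvc_console symbols):

    #include <linux/jiffies.h>
    #include <linux/kernel.h>

    /* Hypothetical: double the poll interval while the console is idle,
     * snap back to the minimum as soon as there is activity. */
    #define EXAMPLE_MIN_TIMEOUT	(HZ / 100)	/* assumed ~10ms floor */
    #define EXAMPLE_MAX_TIMEOUT	(2 * HZ)	/* assumed ~2s ceiling */

    static unsigned long example_timeout = EXAMPLE_MIN_TIMEOUT;

    static unsigned long example_next_poll_delay(int had_activity)
    {
        if (had_activity)
            example_timeout = EXAMPLE_MIN_TIMEOUT;
        else
            example_timeout = min(example_timeout * 2,
                                  (unsigned long)EXAMPLE_MAX_TIMEOUT);
        return example_timeout;	/* delay in jiffies until the next poll */
    }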
-
Stephen Rothwell authored
These got added recently. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Paul Mackerras authored
It turns out that commit e4805922 breaks some existing systems that use the via82cxxx driver. This reverts the change to via82cxxx.c. Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Paul Mackerras authored
Merge branch 'for-2.6.22' of master.kernel.org:/pub/scm/linux/kernel/git/arnd/cell-2.6 into for-2.6.22
-
Paul Mackerras authored
-
- 23 Apr, 2007 29 commits
-
-
Paul Mackerras authored
-
Arnd Bergmann authored
Sync with the Kconfig changes, and enable some options for celleb. Cc: Ishizaki Kou <kou.ishizaki@toshiba.co.jp> Signed-off-by: Jens Osterkamp <jens@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Jeremy Kerr authored
Enable Periodic Recalibration (PTCAL) support for Cell XDR memory, using the new ibm,cbe-start-ptcal and ibm,cbe-stop-ptcal RTAS calls. Tested on QS20 and QS21 (by Thomas Huth). It seems that SLOF has problems disabling PTCAL, at least on QS20; this patch should only be used once those problems have been addressed. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
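A hedged sketch of how such an RTAS service is typically invoked; rtas_token() and rtas_call() are the standard powerpc RTAS entry points, but the argument list expected by ibm,cbe-start-ptcal is an assumption here:

    #include <linux/errno.h>
    #include <linux/types.h>
    #include <asm/rtas.h>

    /* Sketch only: start periodic recalibration on one node; the arguments
     * (node id plus high/low words of a reserved real address) are assumed. */
    static int example_start_ptcal(int nid, u64 addr)
    {
        int token = rtas_token("ibm,cbe-start-ptcal");

        if (token == RTAS_UNKNOWN_SERVICE)
            return -ENODEV;

        /* returns the RTAS status code: 0 on success, negative on error */
        return rtas_call(token, 3, 1, NULL, nid,
                         (u32)(addr >> 32), (u32)(addr & 0xffffffff));
    }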
-
Christian Krafft authored
This patch adds support for a proper device tree. A proper device tree on Cell contains a 'be' node for each CBE, containing nodes for the SPEs and all the other special devices on it. Of course, the old-style device tree is still supported. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Christian Krafft authored
The of_iomap function maps memory for a given device_node and returns a pointer to that memory. This pattern is used in several places, so it makes sense to make it a separate function. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
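A short usage sketch of the helper; the compatible string is a placeholder:

    #include <linux/errno.h>
    #include <linux/io.h>
    #include <asm/prom.h>	/* of_iomap() lived here at the time */

    /* Map the first "reg" entry of a node, use it, and clean up. */
    static int __init example_probe(void)
    {
        struct device_node *np;
        void __iomem *regs;

        np = of_find_compatible_node(NULL, NULL, "example,device"); /* placeholder */
        if (!np)
            return -ENODEV;

        regs = of_iomap(np, 0);
        of_node_put(np);
        if (!regs)
            return -ENOMEM;

        /* ... readl()/writel() on regs ... */

        iounmap(regs);
        return 0;
    }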
-
Christian Krafft authored
At the moment the pmi device driver probes for devices with a given type and a given name. As there may be devices of the same type but with a different name, probing should also be possible by device type alone. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Christian Krafft authored
This patch adds a check that the private driver data has been initialized. The bug showed up because a caller found a pmi device by its type, whereas the pmi driver probes for both the type and the name; since the name was not what the driver expected, the driver data was never initialized. More relaxed probing will be supplied in a separate patch. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Christian Krafft authored
The new PMI driver was added in order to support cpufreq on blades that require the frequency to be controlled by the service processor, so use it on those. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Christian Krafft authored
This patch adds some attributes to the cpu and spu nodes:
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_begin
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_end
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_full_stop
Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Christian Krafft authored
This patch introduces a little function for transforming register values into temperature. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
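A hypothetical sketch of such a conversion helper; the encoding shown (a 6-bit field in 2-degree steps above a base value) is an assumption, not the documented CBE sensor register layout:

    #include <linux/types.h>

    /* Assumed encoding: low 6 bits of the sensor register, in 2 degree C
     * steps above an assumed base of 65 degrees C. */
    static u8 example_reg_to_temp(u8 reg_value)
    {
        return ((reg_value & 0x3f) * 2) + 65;
    }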
-
Christian Krafft authored
This patch adds code to convert a logical CPU number to a CBE node. It removes code that assumed there were two logical CPUs per CBE. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
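A hedged sketch of the kind of helper this implies; the device-tree property used for the lookup is an assumption, not the actual cbe_regs implementation:

    #include <asm/prom.h>

    /* Hypothetical: derive the CBE node of a logical cpu from the device
     * tree instead of assuming two hardware threads per CBE. */
    static int example_cpu_to_cbe_node(int cpu)
    {
        struct device_node *np = of_get_cpu_node(cpu, NULL);
        const u32 *nid;
        int node = -1;

        if (np) {
            nid = of_get_property(np, "node-id", NULL);	/* assumed property name */
            if (nid)
                node = *nid;
            of_node_put(np);
        }
        return node;
    }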
-
Jeremy Kerr authored
This change fixes the case where spu_base and spufs are initialised on a system with no SPEs: unconditionally create the spu_lists so spu_alloc doesn't explode, and check for spu_management ops before starting spufs. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Christoph Hellwig authored
spu_base.c is always built into the kernel image, so there is no need for a cleanup function. And some of the things it does are in the way for my following patches, so I'd rather get rid of it ASAP. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Christoph Hellwig authored
- remove spu_acquire_runnable from spu_run_init; I need to open-code it in spufs_run_spu in the next patch
- remove various inline attributes; we don't really want to inline long functions with multiple callsites
- clean up return values and runcntl_write calls in spu_run_init
- use normal kernel coding style in spu_reacquire_runnable
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Akinobu Mita authored
spu_coredump_calls.owner is NULL in the case of a built-in spufs, so the checks in here break. Check for the availability of the spu_coredump_calls variable instead. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Arnd Bergmann authored
The dynamically allocated read/write buffer in spufs_arch_write_note() is never freed. Convert it to get_free_page() at the same time. Cc: Akinobu Mita <mita@fixstars.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Jeremy Kerr authored
Change the loop in spu_wait to be a little more straightforward. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Jeremy Kerr authored
Add a 'mode=' option to spufs mount arguments. This allows more control over access to the top-level spufs directory. Tested on Cell. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Akinobu Mita authored
GCC may generate an inline copy loop for memcpy() instead of calling the kernel's memcpy(). This inlined version of memcpy() causes an alignment interrupt when copying from local store. This patch uses memcpy_fromio() and memcpy_toio() to copy to and from local store, preventing memcpy() from being inlined. Signed-off-by: Akinobu Mita <mita@fixstars.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
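A short sketch of the substitution described; local store is treated as an ioremapped area here:

    #include <linux/io.h>

    /* Use the io copy helpers so GCC cannot replace the call with an
     * inlined, possibly unaligned, copy loop. */
    static void example_copy_from_ls(void *dst, const void __iomem *ls, size_t len)
    {
        memcpy_fromio(dst, ls, len);
    }

    static void example_copy_to_ls(void __iomem *ls, const void *src, size_t len)
    {
        memcpy_toio(ls, src, len);
    }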
-
Christoph Hellwig authored
We now have proper locking around assignments of the mapping pointers, and the spin_unlock implies enough of a barrier to get rid of the explicit one. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Akinobu Mita authored
When SPU isolation mode is enabled, isolated_loader is allocated by spufs_init_isolated_loader() at module_init() time, but nothing ever frees it. This patch introduces spufs_exit_isolated_loader(), the counterpart of spufs_init_isolated_loader(), and calls it at module_exit(). Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Akinobu Mita <mita@fixstars.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
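A hedged sketch of the teardown; the variable names mirror the init path described above and are assumptions:

    #include <linux/mm.h>

    static void *isolated_loader;			/* assumed: set up at module_init */
    static unsigned long isolated_loader_size;	/* assumed: size of that allocation */

    /* Hypothetical: free the loader image allocated at module_init time,
     * assuming it came from __get_free_pages(). */
    static void example_exit_isolated_loader(void)
    {
        free_pages((unsigned long)isolated_loader,
                   get_order(isolated_loader_size));
    }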
-
Akinobu Mita authored
The spufs module_init forgot to call a few cleanup functions on the error path. This patch also includes cosmetic changes in spu_sched_init() (an indentation fix, and returning an error code). [modified by hch to apply on top of the latest scheduler changes] Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Akinobu Mita <mita@fixstars.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
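The standard unwind pattern involved, sketched with hypothetical function names standing in for the spufs init steps:

    #include <linux/init.h>

    extern int example_first_init(void);	/* placeholders for the real */
    extern void example_first_exit(void);	/* spufs/scheduler init steps */
    extern int example_second_init(void);

    static int __init example_module_init(void)
    {
        int ret;

        ret = example_first_init();
        if (ret)
            return ret;

        ret = example_second_init();
        if (ret)
            goto out_first;		/* undo the earlier step on failure */

        return 0;

    out_first:
        example_first_exit();
        return ret;
    }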
-
Akinobu Mita authored
This patch checks the return value of spu_acquire_runnable() in spufs_mfc_write(). Signed-off-by: Akinobu Mita <mita@fixstars.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Christoph Hellwig authored
There is no reason for run_sema to be a struct semaphore. Change it to a mutex and rename it accordingly. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
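The conversion pattern, sketched with an illustrative structure (not the real spu_context definition):

    #include <linux/errno.h>
    #include <linux/mutex.h>

    struct example_run_ctx {
        struct mutex run_mutex;		/* was: struct semaphore run_sema */
    };

    static void example_ctx_init(struct example_run_ctx *ctx)
    {
        mutex_init(&ctx->run_mutex);	/* was: init_MUTEX(&ctx->run_sema) */
    }

    static int example_run(struct example_run_ctx *ctx)
    {
        if (mutex_lock_interruptible(&ctx->run_mutex))	/* was: down_interruptible() */
            return -ERESTARTSYS;
        /* ... run the context ... */
        mutex_unlock(&ctx->run_mutex);	/* was: up() */
        return 0;
    }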
-
Jeremy Kerr authored
This change populates a siginfo struct for SPE application exceptions (ie, invalid DMAs and illegal instructions). Tested on an IBM Cell Blade. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Arnd Bergmann authored
Until now, we have always entered the spu page fault handler with a mutex for the spu context held. This has multiple bad side-effects:
- it becomes impossible to suspend the context during page faults
- if an spu program attempts to access its own mmio areas through DMA, we get an immediate livelock when the nopage function tries to acquire the same mutex
This patch makes the page fault logic operate on a struct spu_context instead of a struct spu, and moves it from spu_base.c to a new file fault.c inside of spufs. We now also need to copy the dar and dsisr contents of the last fault into the saved context, to have them accessible in case we schedule out the context before activating the page fault handler. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
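A hypothetical sketch of that last point; the saved-state structure and its field names are illustrative assumptions, not the spufs definitions:

    #include <linux/types.h>

    struct example_saved_state {
        u64 dar;	/* data address register at fault time */
        u64 dsisr;	/* data storage interrupt status register */
    };

    /* Latch the fault registers into the saved context so the fault can
     * still be serviced after the context has been scheduled out. */
    static void example_record_fault(struct example_saved_state *csa,
                                     u64 dar, u64 dsisr)
    {
        csa->dar = dar;
        csa->dsisr = dsisr;
    }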
-
Christoph Hellwig authored
There is no reason to execute spu_init_channels under spu_mutex: after the spu has been taken off the freelist, it's ours. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Luke Browning authored
Addition to stop_wq needs to happen before adding to the runqueue, and under the same lock, so that we don't have a race window for a lost wakeup in the spu scheduler. Signed-off-by: Luke Browning <lukebrowning@us.ibm.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
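A generic sketch of the lost-wakeup hazard this closes; names are illustrative, not the spufs scheduler's:

    #include <linux/list.h>
    #include <linux/sched.h>
    #include <linux/spinlock.h>
    #include <linux/wait.h>

    struct example_sched_ctx {
        wait_queue_head_t stop_wq;	/* initialised elsewhere with init_waitqueue_head() */
        struct list_head rq_list;
    };

    static DEFINE_SPINLOCK(example_lock);
    static LIST_HEAD(example_runqueue);

    /* The waiter must be on stop_wq before the context becomes visible on
     * the runqueue, and both steps must happen under the same lock, or a
     * wakeup issued in the window in between is lost. */
    static void example_sleep_until_started(struct example_sched_ctx *ctx)
    {
        DEFINE_WAIT(wait);

        spin_lock(&example_lock);
        prepare_to_wait(&ctx->stop_wq, &wait, TASK_UNINTERRUPTIBLE);
        list_add_tail(&ctx->rq_list, &example_runqueue);	/* now wakeable */
        spin_unlock(&example_lock);

        schedule();
        finish_wait(&ctx->stop_wq, &wait);
    }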
-
Christoph Hellwig authored
For quite a while now, spu state has been protected by a simple mutex instead of the old rw_semaphore, and this means we can simplify the locking around spu_setup_isolated a lot. Instead of doing an spu_release before entering spu_setup_isolated and then calling the complicated spu_acquire_exclusive, we can now simply enter the function locked and in a guaranteed runnable state, so that the only bit of spu_acquire_exclusive that's left is the call to spu_unmap_mappings. Similarly, there's no more need to unlock and reacquire the state_mutex when spu_setup_isolated is done; we can instead always return with the lock held and only drop it in spu_run_init in the failure case. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-