Commit 2133514e authored by Paul Mackerras's avatar Paul Mackerras

Merge samba.org:/stuff/paulus/kernel/linux-2.5

into samba.org:/stuff/paulus/kernel/for-linus-ppc
parents 1f068b65 612344a8
......@@ -70,6 +70,8 @@ dnotify.txt
- info about directory notification in Linux.
driver-model.txt
- info about Linux driver model.
early-userspace/
- info about initramfs, klibc, and userspace early during boot.
exception.txt
- how Linux v2.2 handles exceptions without verify_area etc.
fb/
......
Early userspace support
=======================
Last update: 2003-08-21
"Early userspace" is a set of libraries and programs that provide
various pieces of functionality that are important enough to be
available while a Linux kernel is coming up, but that don't need to be
run inside the kernel itself.
It consists of several major infrastructure components:
- gen_init_cpio, a program that builds a cpio-format archive
containing a root filesystem image. This archive is compressed, and
the compressed image is linked into the kernel image.
- initramfs, a chunk of code that unpacks the compressed cpio image
midway through the kernel boot process.
- klibc, a userspace C library, currently packaged separately, that is
optimised for correctness and small size.
The cpio file format used by initramfs is the "newc" (aka "cpio -c")
format, and is documented in the file "buffer-format.txt". If you
want to generate your own cpio files directly instead of hacking on
gen_init_cpio, you will need to short-circuit the build process in
usr/ so that gen_init_cpio does not get run, then simply pop your own
initramfs_data.cpio.gz file into place.
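If all you need is a small archive to experiment with, the record layout
is simple enough to emit directly. The following standalone C sketch (an
illustration only, not gen_init_cpio itself; the file name, inode number
and device numbers are arbitrary example values) writes one regular file
plus the end-of-archive trailer in the "newc" format described in
buffer-format.txt:

  #include <stdio.h>
  #include <string.h>

  /* Pad the output stream to a 4-byte boundary with NUL bytes. */
  static void pad4(FILE *out, unsigned long *pos)
  {
          while (*pos & 3) {
                  fputc(0, out);
                  (*pos)++;
          }
  }

  /* Emit one "newc" (magic 070701) header; numeric fields are 8-digit hex. */
  static void emit_header(FILE *out, unsigned long *pos, unsigned ino,
                          unsigned mode, unsigned nlink, unsigned filesize,
                          const char *name)
  {
          unsigned namesize = strlen(name) + 1;   /* includes trailing NUL */

          *pos += fprintf(out,
                  "070701"
                  "%08X%08X%08X%08X%08X%08X%08X"  /* ino..filesize */
                  "%08X%08X%08X%08X%08X%08X",     /* maj..chksum   */
                  ino, mode, 0 /* uid */, 0 /* gid */, nlink, 0 /* mtime */,
                  filesize, 3 /* maj */, 1 /* min */, 0 /* rmaj */,
                  0 /* rmin */, namesize, 0 /* chksum, unused for 070701 */);
          *pos += fwrite(name, 1, namesize, out);
          pad4(out, pos);
  }

  int main(void)
  {
          static const char data[] = "hello from initramfs\n";
          unsigned long pos = 0;

          /* one regular file named "hello", mode 0644 */
          emit_header(stdout, &pos, 721, 0100644, 1, sizeof(data) - 1, "hello");
          pos += fwrite(data, 1, sizeof(data) - 1, stdout);
          pad4(stdout, &pos);

          /* end-of-archive marker; its c_filesize must be zero */
          emit_header(stdout, &pos, 0, 0, 1, 0, "TRAILER!!!");
          return 0;
  }

Piping the output through "gzip -9" gives a file you can drop in as
initramfs_data.cpio.gz.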
Where's this all leading?
=========================
The klibc distribution contains some of the necessary software to make
early userspace useful. It is currently maintained separately from the
kernel, but this may change early in the 2.7 era (it missed the boat
for 2.5).
You can obtain somewhat infrequent snapshots of klibc from
ftp://ftp.kernel.org/pub/linux/libs/klibc/
For active users, you are better off using the klibc BitKeeper
repositories, at http://klibc.bkbits.net/
The standalone klibc distribution currently provides three components,
in addition to the klibc library:
- ipconfig, a program that configures network interfaces. It can
configure them statically, or use DHCP to obtain information
dynamically (aka "IP autoconfiguration").
- nfsmount, a program that can mount an NFS filesystem.
- kinit, the "glue" that uses ipconfig and nfsmount to replace the old
support for IP autoconfig, mount a filesystem over NFS, and continue
system boot using that filesystem as root.
kinit is built as a single statically linked binary to save space.
Eventually, several more chunks of kernel functionality will hopefully
move to early userspace:
- Almost all of init/do_mounts* (the beginning of this is already in
place)
- ACPI table parsing
- Insert unwieldy subsystem that doesn't really need to be in kernel
space here
If kinit doesn't meet your current needs and you've got bytes to burn,
the klibc distribution includes a small Bourne-compatible shell (ash)
and a number of other utilities, so you can replace kinit and build
custom initramfs images that meet your needs exactly.
For questions and help, you can sign up for the early userspace
mailing list at http://www.zytor.com/mailman/listinfo/klibc
Bryan O'Sullivan <bos@serpentine.com>
initramfs buffer format
-----------------------
Al Viro, H. Peter Anvin
Last revision: 2002-01-13
Starting with kernel 2.5.x, the old "initial ramdisk" protocol is
getting {replaced/complemented} with the new "initial ramfs"
(initramfs) protocol. The initramfs contents are passed using the same
memory buffer protocol used by the initrd protocol, but the contents
are different. The initramfs buffer contains an archive which is
expanded into a ramfs filesystem; this document details the format of
the initramfs buffer.
The initramfs buffer format is based around the "newc" or "crc" CPIO
formats, and can be created with the cpio(1) utility. The cpio
archive can be compressed using gzip(1). One valid version of an
initramfs buffer is thus a single .cpio.gz file.
The full format of the initramfs buffer is defined by the following
grammar, where:
* is used to indicate "0 or more occurrences of"
(|) indicates alternatives
+ indicates concatenation
GZIP() indicates the gzip(1) of the operand
ALGN(n) means padding with null bytes to an n-byte boundary
initramfs := ("\0" | cpio_archive | cpio_gzip_archive)*
cpio_gzip_archive := GZIP(cpio_archive)
cpio_archive := cpio_file* + (<nothing> | cpio_trailer)
cpio_file := ALGN(4) + cpio_header + filename + "\0" + ALGN(4) + data
cpio_trailer := ALGN(4) + cpio_header + "TRAILER!!!\0" + ALGN(4)
In human terms, the initramfs buffer contains a collection of
compressed and/or uncompressed cpio archives (in the "newc" or "crc"
formats); arbitrary amounts of zero bytes (for padding) can be added
between members.
The cpio "TRAILER!!!" entry (cpio end-of-archive) is optional, but is
not ignored; see "handling of hard links" below.
The structure of the cpio_header is as follows (all fields contain
hexadecimal ASCII numbers fully padded with '0' on the left to the
full width of the field, for example, the integer 4780 is represented
by the ASCII string "000012ac"):
Field name    Field size   Meaning
c_magic          6 bytes   The string "070701" or "070702"
c_ino            8 bytes   File inode number
c_mode           8 bytes   File mode and permissions
c_uid            8 bytes   File uid
c_gid            8 bytes   File gid
c_nlink          8 bytes   Number of links
c_mtime          8 bytes   Modification time
c_filesize       8 bytes   Size of data field
c_maj            8 bytes   Major part of file device number
c_min            8 bytes   Minor part of file device number
c_rmaj           8 bytes   Major part of device node reference
c_rmin           8 bytes   Minor part of device node reference
c_namesize       8 bytes   Length of filename, including final \0
c_chksum         8 bytes   Checksum of data field if c_magic is 070702;
                           otherwise zero
The c_mode field matches the contents of st_mode returned by stat(2)
on Linux, and encodes the file type and file permissions.
The c_filesize should be zero for any file which is not a regular file
or symlink.
The c_chksum field contains a simple 32-bit unsigned sum of all the
bytes in the data field. cpio(1) refers to this as "crc", which is
clearly incorrect (a cyclic redundancy check is a different and
significantly stronger integrity check); however, this is the
algorithm used.
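As a sketch (assuming the member's data bytes are already in memory), the
value that belongs in c_chksum for a 070702 member can be computed like
this:

  #include <stdint.h>
  #include <stddef.h>

  /* Simple 32-bit unsigned sum of all data bytes ("crc" in cpio's terms). */
  static uint32_t cpio_checksum(const unsigned char *data, size_t len)
  {
          uint32_t sum = 0;
          size_t i;

          for (i = 0; i < len; i++)
                  sum += data[i];
          return sum;
  }

The result is then written into the header as an 8-digit hexadecimal ASCII
string like every other numeric field.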
If the filename is "TRAILER!!!" this is actually an end-of-archive
marker; the c_filesize for an end-of-archive marker must be zero.
*** Handling of hard links
When a nondirectory with c_nlink > 1 is seen, the (c_maj,c_min,c_ino)
tuple is looked up in a tuple buffer. If not found, it is entered in
the tuple buffer and the entry is created as usual; if found, a hard
link rather than a second copy of the file is created. It is not
necessary (but permitted) to include a second copy of the file
contents; if the file contents is not included, the c_filesize field
should be set to zero to indicate no data section follows. If data is
present, the previous instance of the file is overwritten; this allows
the data-carrying instance of a file to occur anywhere in the sequence
(GNU cpio is reported to attach the data to the last instance of a
file only.)
c_filesize must not be zero for a symlink.
When a "TRAILER!!!" end-of-archive marker is seen, the tuple buffer is
reset. This permits archives which are generated independently to be
concatenated.
To combine file data from different sources (without having to
regenerate the (c_maj,c_min,c_ino) fields), therefore, either one of
the following techniques can be used:
a) Separate the different file data sources with a "TRAILER!!!"
end-of-archive marker, or
b) Make sure c_nlink == 1 for all nondirectory entries.
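To make the hard-link rule above concrete, here is a minimal sketch of the
kind of (c_maj,c_min,c_ino) tuple buffer an extractor might keep; the
structure, sizes and function names are illustrative, not taken from the
kernel's actual unpacking code:

  #include <string.h>

  struct link_key {
          unsigned maj, min, ino;
          char path[256];         /* where the first instance was created */
  };

  static struct link_key tuple_buf[64];
  static int tuple_count;

  /* Return the path of an earlier instance, or NULL if this is the first. */
  static const char *lookup_or_add(unsigned maj, unsigned min, unsigned ino,
                                   const char *path)
  {
          int i;

          for (i = 0; i < tuple_count; i++)
                  if (tuple_buf[i].maj == maj && tuple_buf[i].min == min &&
                      tuple_buf[i].ino == ino)
                          return tuple_buf[i].path;

          if (tuple_count < (int)(sizeof(tuple_buf) / sizeof(tuple_buf[0]))) {
                  tuple_buf[tuple_count].maj = maj;
                  tuple_buf[tuple_count].min = min;
                  tuple_buf[tuple_count].ino = ino;
                  strncpy(tuple_buf[tuple_count].path, path,
                          sizeof(tuple_buf[tuple_count].path) - 1);
                  tuple_count++;
          }
          return NULL;
  }

  /* A "TRAILER!!!" record simply resets the buffer: */
  static void reset_tuple_buf(void)
  {
          tuple_count = 0;
  }

A lookup hit means "create a hard link to the earlier path"; a miss means
"create the entry normally and remember it".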
Device Power Management
Device power management encompasses two areas - the ability to save
state and transition a device to a low-power state when the system is
entering a low-power state; and the ability to transition a device to
a low-power state while the system is running (and independently of
any other power management activity).
Methods
The methods to suspend and resume devices reside in struct bus_type:
struct bus_type {
...
int (*suspend)(struct device * dev, u32 state);
int (*resume)(struct device * dev);
};
Each bus driver is responsible for implementing these methods, translating
the call into a bus-specific request and forwarding the call to the
bus-specific drivers. For example, PCI drivers implement suspend() and
resume() methods in struct pci_driver. The PCI core is simply
responsible for translating the pointers to PCI-specific ones and
calling the low-level driver.
This is done to a) ease transition to the new power management methods
and leverage the existing PM code in various bus drivers; b) allow
buses to implement generic and default PM routines for devices, and c)
make the flow of execution obvious to the reader.
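For illustration, a hypothetical bus might forward these calls to its
driver-specific methods roughly as follows (the foo_* names and structures
are invented for this sketch; they are not an actual kernel bus):

  #include <linux/device.h>
  #include <linux/types.h>

  struct foo_dev {
          struct device dev;
          /* bus-specific fields ... */
  };

  struct foo_driver {
          struct device_driver drv;
          int (*suspend)(struct foo_dev *fdev, u32 state);
          int (*resume)(struct foo_dev *fdev);
  };

  #define to_foo_dev(d)    container_of((d), struct foo_dev, dev)
  #define to_foo_driver(d) container_of((d), struct foo_driver, drv)

  static int foo_bus_suspend(struct device *dev, u32 state)
  {
          struct foo_driver *fdrv;

          if (!dev->driver)
                  return 0;
          fdrv = to_foo_driver(dev->driver);
          if (fdrv->suspend)
                  return fdrv->suspend(to_foo_dev(dev), state);
          return 0;               /* default: nothing to do */
  }

  static int foo_bus_resume(struct device *dev)
  {
          struct foo_driver *fdrv;

          if (!dev->driver)
                  return 0;
          fdrv = to_foo_driver(dev->driver);
          if (fdrv->resume)
                  return fdrv->resume(to_foo_dev(dev));
          return 0;
  }

  struct bus_type foo_bus_type = {
          .name    = "foo",
          .suspend = foo_bus_suspend,
          .resume  = foo_bus_resume,
  };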
System Power Management
When the system enters a low-power state, the device tree is walked in
a depth-first fashion to transition each device into a low-power
state. The ordering of the device tree is guaranteed by the order in
which devices get registered - children are never registered before
their ancestors, and devices are placed at the back of the list when
registered. By walking the list in reverse order, we are guaranteed to
suspend devices in the proper order.
Devices are suspended once with interrupts enabled. Drivers are
expected to stop I/O transactions, save device state, and place the
device into a low-power state. Drivers may sleep, allocate memory,
etc. at will.
Some devices are broken and will inevitably have problems powering
down or disabling themselves with interrupts enabled. For these
special cases, their drivers may return -EAGAIN. This will put the
device on a list to be taken care of later. When interrupts are
disabled, before we enter the low-power state, these drivers are
called again to put their devices to sleep.
On resume, the devices that returned -EAGAIN will be called to power
themselves back on with interrupts disabled. Once interrupts have been
re-enabled, the rest of the drivers will be called to resume their
devices. On resume, a driver is responsible for powering back on each
device, restoring state, and re-enabling I/O transactions for that
device.
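As an invented example of the deferral described above (the bar_* driver
and its helpers are hypothetical, and the irqs_disabled() check is just one
way a driver could tell the two passes apart):

  #include <linux/device.h>
  #include <linux/types.h>
  #include <linux/errno.h>
  #include <asm/system.h>  /* irqs_disabled(); header location varies by version */

  /* invented helpers, assumed to exist elsewhere in this driver */
  extern void bar_stop_io(struct device *dev);
  extern void bar_save_registers(struct device *dev);
  extern void bar_set_power(struct device *dev, u32 state);

  static int bar_suspend(struct device *dev, u32 state)
  {
          /*
           * This hypothetical device can only be powered down from a
           * context with interrupts off; defer to the second,
           * interrupts-disabled pass by returning -EAGAIN.
           */
          if (!irqs_disabled())
                  return -EAGAIN;

          bar_stop_io(dev);
          bar_save_registers(dev);
          bar_set_power(dev, state);
          return 0;
  }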
System devices follow a slightly different API, which can be found in
include/linux/sysdev.h
drivers/base/sys.c
System devices will only be suspended with interrupts disabled, and
after all other devices have been suspended. On resume, they will be
resumed before any other devices, and also with interrupts disabled.
Runtime Power Management
Many devices are able to dynamically power down while the system is
still running. This feature is useful for devices that are not being
used, and can offer significant power savings on a running system.
In each device's directory, there is a 'power' directory, which
contains at least a 'state' file. Reading from this file displays what
power state the device is currently in. Writing to this file initiates
a transition to the specified power state, which must be a decimal
integer in the range 1-3 inclusive, or 0 for 'On'.
The PM core will call the ->suspend() method in the bus_type object
that the device belongs to if the specified state is not 0, or
->resume() if it is.
Nothing will happen if the specified state is the same state the
device is currently in.
If the device is already in a low-power state, and the specified state
is another, but different, low-power state, the ->resume() method will
first be called to power the device back on, then ->suspend() will be
called again with the new state.
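For example, a small userspace helper could drive a device's state file as
follows (the sysfs path below is only a hypothetical example device):

  #include <stdio.h>

  #define DEV_STATE "/sys/devices/pci0000:00/0000:00:1f.2/power/state"

  int main(void)
  {
          char buf[16];
          FILE *f;

          /* read the current power state */
          f = fopen(DEV_STATE, "r");
          if (!f || !fgets(buf, sizeof(buf), f)) {
                  perror(DEV_STATE);
                  return 1;
          }
          fclose(f);
          printf("current state: %s", buf);

          /* request state 3 (D3); writing "0" would resume the device */
          f = fopen(DEV_STATE, "w");
          if (!f || fputs("3\n", f) == EOF || fclose(f) == EOF) {
                  perror(DEV_STATE);
                  return 1;
          }
          return 0;
  }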
The driver is responsible for saving the working state of the device
and putting it into the low-power state specified. If this was
successful, it returns 0, and the device's power_state field is
updated.
The driver must take care to know whether or not it is able to
properly resume the device, including all necessary steps of
reinitialization. (This is the hardest part, and the one most
protected by NDA'd documents.)
The driver must also take care not to suspend a device that is
currently in use. It is the driver's responsibility to provide its own
exclusion mechanisms.
The runtime power transition happens with interrupts enabled. If a
device cannot support being powered down with interrupts enabled, it
may return -EAGAIN (as it would during a system power management
transition), but it will _not_ be called again, and the transaction
will fail.
There is currently no way to know what states a device or driver
supports a priori. This will change in the future.
Driver Detach Power Management
The kernel now supports the ability to place a device in a low-power
state when it is detached from its driver, which happens when its
module is removed.
Each device contains a 'detach_state' file in its sysfs directory
which can be used to control this state. Reading from this file
displays what the current detach state is set to. This is 0 (On) by
default. A user may write a positive integer value to this file in the
range of 1-4 inclusive.
A value of 1-3 will indicate the device should be placed in that
low-power state, which will cause ->suspend() to be called for that
device. A value of 4 indicates that the device should be shut down, so
->shutdown() will be called for that device.
The driver is responsible for reinitializing the device when the
module is re-inserted, during its ->probe() (or equivalent) method.
The driver core will not call any extra functions when binding the
device to the driver.
Power Management Interface
The power management subsystem provides a unified sysfs interface to
userspace, regardless of what architecture or platform one is
running. The interface exists in the /sys/power/ directory (assuming
sysfs is mounted at /sys).
/sys/power/state controls system power state. Reading from this file
returns what states are supported, which is hard-coded to 'standby'
(Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk'
(Suspend-to-Disk).
Writing one of those strings to this file causes the system to
transition into that state. Please see the file
Documentation/power/states.txt for a description of each of those
states.
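For instance, a minimal C program (equivalent to echoing the string from a
shell) could request suspend-to-RAM like this:

  #include <stdio.h>

  int main(void)
  {
          FILE *f = fopen("/sys/power/state", "w");

          if (!f) {
                  perror("/sys/power/state");
                  return 1;
          }
          /* "standby" and "disk" are written the same way */
          if (fputs("mem\n", f) == EOF || fclose(f) == EOF) {
                  perror("write /sys/power/state");
                  return 1;
          }
          return 0;
  }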
/sys/power/disk controls the operating mode of the suspend-to-disk
mechanism. Suspend-to-disk can be handled in several ways. The
greatest distinction is who writes memory to disk - the firmware or
the kernel. If the firmware does it, we assume that it also handles
suspending the system.
If the kernel does it, then we have three options for putting the system
to sleep - using the platform driver (e.g. ACPI or other PM
registers), powering off the system or rebooting the system (for
testing). The system will support either 'firmware' or 'platform', and
that is known a priori. But, the user may choose 'shutdown' or
'reboot' as alternatives.
Reading from this file will display what the mode is currently set
to. Writing to this file will accept one of
'firmware'
'platform'
'shutdown'
'reboot'
It will only change to 'firmware' or 'platform' if the system supports
it.
System Power Management States
The kernel supports three power management states generically, though
each is dependent on platform support code to implement the low-level
details for each state. This file describes each state, what it is
commonly called, what ACPI state it maps to, and what string to write
to /sys/power/state to enter it.
State: Standby / Power-On Suspend
ACPI State: S1
String: "standby"
This state offers minimal, though real, power savings, while providing
a very low-latency transition back to a working system. No operating
state is lost (the CPU retains power), so the system easily starts up
again where it left off.
We try to put devices in a low-power state equivalent to D1, which
also offers only modest power savings but low resume latency. Not all
devices support D1, and those that don't are left on.
A transition from Standby to the On state should take about 1-2
seconds.
State: Suspend-to-RAM
ACPI State: S3
String: "mem"
This state offers significant power savings as everything in the
system is put into a low-power state, except for memory, which is
placed in self-refresh mode to retain its contents.
System and device state is saved and kept in memory. All devices are
suspended and put into D3. In many cases, all peripheral buses lose
power when entering STR, so devices must be able to handle the
transition back to the On state.
For ACPI at least, STR requires some minimal bootstrapping code to
resume the system from STR. This may also be true on other platforms.
A transition from Suspend-to-RAM to the On state should take about
3-5 seconds.
State: Suspend-to-disk
ACPI State: S4
String: "disk"
This state offers the greatest power savings, and can be used even in
the absence of low-level platform support for power management. This
state operates similarly to Suspend-to-RAM, but includes a final step
of writing memory contents to disk. On resume, this is read and memory
is restored to its pre-suspend state.
STD can be handled by the firmware or the kernel. If it is handled by
the firmware, it usually requires a dedicated partition that must be
set up via another operating system for it to use. Despite the
inconvenience, this method requires minimal work by the kernel, since
the firmware will also handle restoring memory contents on resume.
If the kernel is responsible for persistently saving state, a mechanism
called 'swsusp' (Swap Suspend) is used to write memory contents to
free swap space. swsusp has some restrictive requirements, but should
work in most cases. Some, albeit outdated, documentation can be found
in Documentation/power/swsusp.txt.
Once memory state is written to disk, the system may either enter a
low-power state (like ACPI S4), or it may simply power down. Powering
down offers greater savings, and allows this mechanism to work on any
system. However, entering a real low-power state allows the user to
trigger wake up events (e.g. pressing a key or opening a laptop lid).
A transition from Suspend-to-Disk to the On state should take about 30
seconds, though it's typically a bit more with the current
implementation.
......@@ -363,6 +363,13 @@ L: linux-scsi@vger.kernel.org
W: http://www.dandelion.com/Linux/
S: Maintained
COMMON INTERNET FILE SYSTEM (CIFS)
P: Steve French
M: sfrench@samba.org
L: samba-technical@lists.samba.org
W: http://us1.samba.org/samba/Linux_CIFS_client.html
S: Supported
CIRRUS LOGIC GENERIC FBDEV DRIVER
P: Jeff Garzik
M: jgarzik@pobox.com
......@@ -2162,6 +2169,12 @@ W: http://www.ic.nec.co.jp/micro/uclinux/eng/
W: http://www.ee.nec.de/uclinux/
S: Supported
UCLINUX FOR RENESAS H8/300
P: Yoshinori Sato
M: ysato@users.sourceforge.jp
W: http://uclinux-h8.sourceforge.jp/
S: Supported
USB DIAMOND RIO500 DRIVER
P: Cesar Miquel
M: miquel@df.uba.ar
......
......@@ -190,6 +190,8 @@ endmenu
source "drivers/base/Kconfig"
source "drivers/mtd/Kconfig"
source "drivers/block/Kconfig"
source "drivers/ide/Kconfig"
......
......@@ -5,7 +5,7 @@
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# (C) Copyright 2002, Yoshinori Sato <ysato@users.sourceforge.jp>
# (C) Copyright 2002,2003 Yoshinori Sato <ysato@users.sourceforge.jp>
#
ifndef include-config
-include $(TOPDIR)/.config
......@@ -37,8 +37,8 @@ CFLAGS += $(cflags-y)
CFLAGS += -mint32 -fno-builtin -Os
CFLAGS += -g
CFLAGS += -D__linux__
CFLAGS += -DUTS_SYSNAME=\"uClinux\" -DTARGET=$(BOARD)
AFLAGS += -DPLATFORM=$(PLATFORM) -DTARGET=$(BOARD) -DMODEL=$(MODEL) $(cflags-y)
CFLAGS += -DUTS_SYSNAME=\"uClinux\"
AFLAGS += -DPLATFORM=$(PLATFORM) -DMODEL=$(MODEL) $(cflags-y)
LDFLAGS += $(ldflags-y)
CROSS_COMPILE = h8300-elf-
......@@ -53,28 +53,32 @@ core-y += arch/$(ARCH)/kernel/ \
libs-y += arch/$(ARCH)/lib/ $(LIBGCC)
export MODEL
boot := arch/h8300/boot
export MODEL PLATFORM BOARD
archmrproper:
archclean:
$(call descend arch/$(ARCH), subdirclean)
$(Q)$(MAKE) $(clean)=$(boot)
prepare: include/asm-$(ARCH)/machine-depend.h include/asm-$(ARCH)/asm-offsets.h
prepare: include/asm-$(ARCH)/asm-offsets.h
include/asm-$(ARCH)/machine-depend.h: include/asm-$(ARCH)/$(BOARD)/machine-depend.h
$(Q)ln -sf $(BOARD)/machine-depend.h \
include/asm-$(ARCH)/machine-depend.h
@echo ' Create include/asm-$(ARCH)/machine-depend.h'
include/asm-$(ARCH)/asm-offsets.h: arch/$(ARCH)/kernel/asm-offsets.s \
include/asm include/linux/version.h
$(call filechk,gen-asm-offsets)
vmlinux.bin: vmlinux
$(OBJCOPY) -Obinary $< $@
vmlinux.srec: vmlinux
$(OBJCOPY) -Osrec $< $@
vmlinux.srec vmlinux.bin: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
define archhelp
echo 'vmlinux.bin - Create raw binary'
echo 'vmlinux.srec - Create srec binary'
endef
CLEAN_FILES += arch/$(ARCH)/vmlinux.bin arch/$(ARCH)/vmlinux.srec
CLEAN_FILES += include/asm-$(ARCH)/asm-offsets.h include/asm-$(ARCH)/machine-depend.h
# arch/h8300/boot/Makefile
targets := vmlinux.srec vmlinux.bin
OBJCOPYFLAGS_vmlinux.srec := -Osrec
OBJCOPYFLAGS_vmlinux.bin := -Obinary
$(obj)/vmlinux.srec $(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
@echo ' Kernel: $@ is ready'
CLEAN_FILES += arch/$(ARCH)/vmlinux.bin arch/$(ARCH)/vmlinux.srec
......@@ -157,6 +157,7 @@ struct sigframe
#if defined(CONFIG_CPU_H8S)
short dummy_exr;
#endif
long dummy_pc;
char *pretcode;
unsigned char retcode[8];
unsigned long extramask[_NSIG_WORDS-1];
......@@ -170,6 +171,7 @@ struct rt_sigframe
#if defined(CONFIG_CPU_H8S)
short dummy_exr;
#endif
long dummy_pc;
char *pretcode;
unsigned char retcode[8];
struct siginfo info;
......@@ -241,7 +243,7 @@ rt_restore_ucontext(struct pt_regs *regs, struct ucontext *uc, int *pd0)
asmlinkage int do_sigreturn(unsigned long __unused,...)
{
struct pt_regs *regs = (struct pt_regs *) &__unused;
struct pt_regs *regs = (struct pt_regs *) (&__unused - 1);
unsigned long usp = rdusp();
struct sigframe *frame = (struct sigframe *)(usp - 4);
sigset_t set;
......@@ -416,7 +418,6 @@ static void setup_rt_frame (int sig, struct k_sigaction *ka, siginfo_t *info,
/* Set up to return from userspace. */
err |= __put_user(frame->retcode, &frame->pretcode);
/* moveq #,d0; notb d0; movea.l #,a5; trap #0 */
/* sub.l er0,er0; mov.b #__NR_rt_sigreturn,r0l; trapa #0 */
err != __put_user(0x1a80f800 + (__NR_rt_sigreturn & 0xff),
(long *)(frame->retcode + 0));
......
......@@ -275,14 +275,18 @@ SYMBOL_NAME_LABEL(sys_call_table)
.long SYMBOL_NAME(sys_ni_syscall) /* sys_remap_file_pages */
.long SYMBOL_NAME(sys_set_tid_address)
.long SYMBOL_NAME(sys_timer_create)
.long SYMBOL_NAME(sys_timer_settime) /* 260 */
.long SYMBOL_NAME(sys_timer_settime) /* 260 */
.long SYMBOL_NAME(sys_timer_gettime)
.long SYMBOL_NAME(sys_timer_getoverrun)
.long SYMBOL_NAME(sys_timer_delete)
.long SYMBOL_NAME(sys_clock_settime)
.long SYMBOL_NAME(sys_clock_gettime) /* 265 */
.long SYMBOL_NAME(sys_clock_gettime) /* 265 */
.long SYMBOL_NAME(sys_clock_getres)
.long SYMBOL_NAME(sys_clock_nanosleep)
.long SYMBOL_NAME(sys_statfs64)
.long SYMBOL_NAME(sys_fstatfs64)
.long SYMBOL_NAME(sys_tgkill) /* 270 */
.long SYMBOL_NAME(sys_utimes)
.rept NR_syscalls-(.-SYMBOL_NAME(sys_call_table))/4
.long SYMBOL_NAME(sys_ni_syscall)
......
......@@ -2,58 +2,62 @@
#ifdef CONFIG_H8300H_GENERIC
#ifdef CONFIG_ROMKERNEL
#include "platform/h8300h/generic/rom.ld"
#include "../platform/h8300h/generic/rom.ld"
#endif
#ifdef CONFIG_RAMKERNEL
#include "platform/h8300h/generic/ram.ld"
#include "../platform/h8300h/generic/ram.ld"
#endif
#endif
#ifdef CONFIG_H8300H_AKI3068NET
#ifdef CONFIG_ROMKERNEL
#include "platform/h8300h/aki3068net/rom.ld"
#include "../platform/h8300h/aki3068net/rom.ld"
#endif
#ifdef CONFIG_RAMKERNEL
#include "platform/h8300h/aki3068net/ram.ld"
#include "../platform/h8300h/aki3068net/ram.ld"
#endif
#endif
#ifdef CONFIG_H8300H_H8MAX
#ifdef CONFIG_ROMKERNEL
#include "platform/h8300h/h8max/rom.ld"
#include "../platform/h8300h/h8max/rom.ld"
#endif
#ifdef CONFIG_RAMKERNEL
#include "platform/h8300h/h8max/ram.ld"
#include "../platform/h8300h/h8max/ram.ld"
#endif
#endif
#ifdef CONFIG_H8300H_SIM
#ifdef CONFIG_ROMKERNEL
#include "platform/h8300h/generic/rom.ld"
#include "../platform/h8300h/generic/rom.ld"
#endif
#ifdef CONFIG_RAMKERNEL
#include "platform/h8300h/generic/ram.ld"
#include "../platform/h8300h/generic/ram.ld"
#endif
#endif
#ifdef CONFIG_H8S_SIM
#ifdef CONFIG_ROMKERNEL
#include "platform/h8s/generic/rom.ld"
#include "../platform/h8s/generic/rom.ld"
#endif
#ifdef CONFIG_RAMKERNEL
#include "platform/h8s/generic/ram.ld"
#include "../platform/h8s/generic/ram.ld"
#endif
#endif
#ifdef CONFIG_H8S_EDOSK2674
#ifdef CONFIG_ROMKERNEL
#include "platform/h8s/edosk2674/rom.ld"
#include "../platform/h8s/edosk2674/rom.ld"
#endif
#ifdef CONFIG_RAMKERNEL
#include "platform/h8s/edosk2674/ram.ld"
#include "../platform/h8s/edosk2674/ram.ld"
#endif
#endif
#if defined(CONFIG_H8300H_SIM) || defined(CONFIG_H8S_SIM)
INPUT(romfs.o)
#endif
_jiffies = _jiffies_64 + 4;
SECTIONS
......@@ -169,6 +173,10 @@ SECTIONS
__end = . ;
__ramstart = .;
} > ram
.romfs :
{
*(.romfs*)
} > ram
.dummy :
{
COMMAND_START = . - 0x200 ;
......
/* romfs move to __ebss */
#include <asm/linkage.h>
#if defined(__H8300H__)
.h8300h
#endif
#if defined(__H8300S__)
.h8300s
#endif
.text
.globl __move_romfs
_romfs_sig_len = 8
__move_romfs:
mov.l #__sbss,er0
mov.l #_romfs_sig,er1
mov.b #_romfs_sig_len,r3l
1: /* check romfs image */
mov.b @er0+,r2l
mov.b @er1+,r2h
cmp.b r2l,r2h
bne 2f
dec.b r3l
bne 1b
/* find romfs image */
mov.l @__sbss+8,er0 /* romfs length(be) */
mov.l #__sbss,er1
add.l er0,er1 /* romfs image end */
mov.l #__ebss,er2
add.l er0,er2 /* distination address */
adds #2,er0
adds #1,er0
shlr er0
shlr er0 /* transfer length */
1:
mov.l @er1,er3 /* copy image */
mov.l er3,@er2
subs #4,er1
subs #4,er2
dec.l #1,er0
bpl 1b
2:
rts
.section .rodata
_romfs_sig:
.ascii "-rom1fs-"
.end
......@@ -25,15 +25,11 @@
#define CMFA 6
extern int request_irq_boot(unsigned int,
irqreturn_t (*handler)(int, void *, struct pt_regs *),
unsigned long, const char *, void *);
void __init platform_timer_setup(irqreturn_t (*timer_int)(int, void *, struct pt_regs *))
{
outb(H8300_TIMER_COUNT_DATA,TCORA2);
outb(0x00,_8TCSR2);
request_irq_boot(40,timer_int,0,"timer",0);
request_irq(40,timer_int,0,"timer",0);
outb(0x40|0x08|0x03,_8TCR2);
}
......
......@@ -111,7 +111,7 @@ LRET = 38
mov.l er1,@(8:16,er0)
mov.l @sp+,er1
add.l #(LRET-LORIG),sp /* remove LORIG - LRET */
add.l #(LRET-LER1),sp /* remove LORIG - LRET */
mov.l sp,@SYMBOL_NAME(sw_ksp)
mov.l er0,sp
bra 8f
......@@ -255,6 +255,7 @@ SYMBOL_NAME_LABEL(ret_from_exception)
btst #TIF_NEED_RESCHED,r1l
bne @SYMBOL_NAME(reschedule):16
mov.l sp,er1
subs #4,er1 /* adjust retpc */
mov.l er2,er0
jsr @SYMBOL_NAME(do_signal)
3:
......
......@@ -44,14 +44,19 @@ SYMBOL_NAME_LABEL(_start)
/* copy .data */
#if !defined(CONFIG_H8300H_SIM)
/* copy .data */
mov.l #__begin_data,er5
mov.l #__sdata,er6
mov.l #__edata,er4
sub.l er6,er4
sub.l er6,er4
shlr.l er4
shlr.l er4
1:
eepmov.w
dec.w #1,e4
bpl 1b
mov.l @er5+,er0
mov.l er0,@er6
adds #4,er6
dec.l #1,er4
bne 1b
#endif
/* copy kernel commandline */
......
......@@ -25,15 +25,11 @@
#define CMFA 6
extern int request_irq_boot(unsigned int,
irqreturn_t (*handler)(int, void *, struct pt_regs *),
unsigned long, const char *, void *);
void __init platform_timer_setup(irqreturn_t (*timer_int)(int, void *, struct pt_regs *))
{
outb(H8300_TIMER_COUNT_DATA,TCORA2);
outb(0x00,_8TCSR2);
request_irq_boot(40,timer_int,0,"timer",0);
request_irq(40,timer_int,0,"timer",0);
outb(0x40|0x08|0x03,_8TCR2);
}
......
......@@ -52,7 +52,8 @@ typedef struct irq_handler {
const char *devname;
} irq_handler_t;
irq_handler_t *irq_list[NR_IRQS];
static irq_handler_t *irq_list[NR_IRQS];
static int use_kmalloc;
extern unsigned long *interrupt_redirect_table;
......@@ -119,20 +120,6 @@ void __init init_IRQ(void)
#endif
}
void __init request_irq_boot(unsigned int irq,
irqreturn_t (*handler)(int, void *, struct pt_regs *),
unsigned long flags, const char *devname, void *dev_id)
{
irq_handler_t *irq_handle;
irq_handle = alloc_bootmem(sizeof(irq_handler_t));
irq_handle->handler = handler;
irq_handle->flags = flags;
irq_handle->count = 0;
irq_handle->dev_id = dev_id;
irq_handle->devname = devname;
irq_list[irq] = irq_handle;
}
int request_irq(unsigned int irq,
irqreturn_t (*handler)(int, void *, struct pt_regs *),
unsigned long flags, const char *devname, void *dev_id)
......@@ -154,7 +141,14 @@ int request_irq(unsigned int irq,
return -EBUSY;
H8300_GPIO_DDR(H8300_GPIO_P9, (irq - EXT_IRQ0), 0);
}
irq_handle = (irq_handler_t *)kmalloc(sizeof(irq_handler_t), GFP_ATOMIC);
if (use_kmalloc)
irq_handle = (irq_handler_t *)kmalloc(sizeof(irq_handler_t), GFP_ATOMIC);
else {
irq_handle = alloc_bootmem(sizeof(irq_handler_t));
(unsigned long)irq_handle |= 0x80000000; /* bootmem allocater */
}
if (irq_handle == NULL)
return -ENOMEM;
......@@ -177,8 +171,10 @@ void free_irq(unsigned int irq, void *dev_id)
irq, irq_list[irq]->devname);
if (irq >= EXT_IRQ0 && irq <= EXT_IRQ5)
*(volatile unsigned char *)IER &= ~(1 << (irq - EXT_IRQ0));
kfree(irq_list[irq]);
irq_list[irq] = NULL;
if ((irq_list[irq] & 0x80000000) == 0) {
kfree(irq_list[irq]);
irq_list[irq] = NULL;
}
}
/*
......@@ -244,3 +240,9 @@ int show_interrupts(struct seq_file *p, void *v)
void init_irq_proc(void)
{
}
static void __init enable_kmalloc(void)
{
use_kmalloc = 1;
}
__initcall(enable_kmalloc);
......@@ -37,7 +37,8 @@
/* CPU Reset entry */
SYMBOL_NAME_LABEL(_start)
mov.l #RAMEND,sp
ldc #0x07,exr
ldc #0x80,ccr
ldc #0x00,exr
/* Peripheral Setup */
bclr #4,@INTCR:8 /* interrupt mode 2 */
......@@ -46,7 +47,7 @@ SYMBOL_NAME_LABEL(_start)
bset #1,@ISCRL+1:16 /* IRQ0 Positive Edge */
bclr #0,@ISCRL+1:16
#if defined(CONFIG_BLK_DEV_BLKMEM)
#if defined(CONFIG_MTD_UCLINUX)
/* move romfs image */
jsr @__move_romfs
#endif
......@@ -71,7 +72,7 @@ SYMBOL_NAME_LABEL(_start)
eepmov.w
/* uClinux kernel start */
ldc #0x10,ccr /* running kernel */
ldc #0x90,ccr /* running kernel */
mov.l #SYMBOL_NAME(init_thread_union),sp
add.l #0x2000,sp
jsr @_start_kernel
......
......@@ -26,10 +26,6 @@
#define REGS(regs) __REGS(regs)
#define __REGS(regs) #regs
extern int request_irq_boot(unsigned int,
irqreturn_t (*handler)(int, void *, struct pt_regs *),
unsigned long, const char *, void *);
int __init platform_timer_setup(irqreturn_t (*timer_int)(int, void *, struct pt_regs *))
{
unsigned char mstpcrl;
......@@ -38,7 +34,7 @@ int __init platform_timer_setup(irqreturn_t (*timer_int)(int, void *, struct pt_
outb(mstpcrl,MSTPCRL);
outb(H8300_TIMER_COUNT_DATA,_8TCORA1);
outb(0x00,_8TCSR1);
request_irq_boot(76,timer_int,0,"timer",0);
request_irq(76,timer_int,0,"timer",0);
outb(0x40|0x08|0x03,_8TCR1);
return 0;
}
......
......@@ -112,7 +112,7 @@ LRET = 40
mov.l er1,@(10:16,er0)
mov.l @sp+,er1
add.l #(LRET-LORIG),sp /* remove LORIG - LRET */
add.l #(LRET-LER1),sp /* remove LORIG - LRET */
mov.l sp,@SYMBOL_NAME(sw_ksp)
mov.l er0,sp
bra 8f
......@@ -252,6 +252,7 @@ SYMBOL_NAME_LABEL(ret_from_exception)
btst #TIF_NEED_RESCHED,r1l
bne @SYMBOL_NAME(reschedule):16
mov.l sp,er1
subs #4,er1 /* adjust retpc */
mov.l er2,er0
jsr @SYMBOL_NAME(do_signal)
3:
......
......@@ -37,13 +37,14 @@
/* CPU Reset entry */
SYMBOL_NAME_LABEL(_start)
mov.l #RAMEND,sp
ldc #0x07,exr
ldc #0x80,ccr
ldc #0x00,exr
/* Peripheral Setup */
bclr #4,@INTCR:8 /* interrupt mode 2 */
bset #5,@INTCR:8
#if defined(CONFIG_BLK_DEV_BLKMEM)
#if defined(CONFIG_MTD_UCLINUX)
/* move romfs image */
jsr @__move_romfs
#endif
......@@ -68,7 +69,7 @@ SYMBOL_NAME_LABEL(_start)
eepmov.w
/* uClinux kernel start */
ldc #0x10,ccr /* running kernel */
ldc #0x90,ccr /* running kernel */
mov.l #SYMBOL_NAME(init_thread_union),sp
add.l #0x2000,sp
jsr @_start_kernel
......
......@@ -33,36 +33,32 @@ SYMBOL_NAME_LABEL(_start)
/* Peripheral Setup */
/* .bss clear */
mov.l #__sbss,er5
mov.l er5,er6
inc.l #1,er6
mov.l #__ebss,er4
sub.l er5,er4
sub.w r0,r0
mov.b r0l,@er5
1:
eepmov.w
dec.w #1,e4
bpl 1b
/* copy .data */
#if !defined(CONFIG_H8S_SIM)
mov.l #__begin_data,er5
mov.l #__sdata,er6
mov.l #__edata,er4
sub.l er6,er4
sub.l er6,er4
shlr.l #2,er4
1:
eepmov.w
dec.w #1,e4
bpl 1b
mov.l @er5+,er0
mov.l er0,@er6
adds #4,er6
dec.l #1,er4
bne 1b
#endif
/* copy kernel commandline */
mov.l #COMMAND_START,er5
mov.l #SYMBOL_NAME(_command_line),er6
mov.w #512,r4
eepmov.w
/* .bss clear */
mov.l #__sbss,er5
mov.l #__ebss,er4
sub.l er5,er4
shlr.l #2,er4
sub.l er0,er0
1:
mov.l er0,@er5
adds #4,er5
dec.l #1,er4
bne 1b
/* linux kernel start */
ldc #0x90,ccr /* running kernel */
......
......@@ -6,6 +6,6 @@ MEMORY
vector : ORIGIN = 0x000000, LENGTH = 0x000200
rom : ORIGIN = 0x000200, LENGTH = 0x200000-0x000200
erom : ORIGIN = 0x200000, LENGTH = 0
ram : ORIGIN = 0x200000, LENGTH = 0x200000
eram : ORIGIN = 0x400000, LENGTH = 0
ram : ORIGIN = 0x200000, LENGTH = 0x400000
eram : ORIGIN = 0x600000, LENGTH = 0
}
......@@ -23,15 +23,11 @@
#include <asm/irq.h>
#include <asm/regs267x.h>
extern int request_irq_boot(unsigned int,
irqreturn_t (*handler)(int, void *, struct pt_regs *),
unsigned long, const char *, void *);
int platform_timer_setup(irqreturn_t (*timer_int)(int, void *, struct pt_regs *))
{
outb(H8300_TIMER_COUNT_DATA,_8TCORA1);
outb(0x00,_8TCSR1);
request_irq_boot(76,timer_int,0,"timer",0);
request_irq(76,timer_int,0,"timer",0);
outb(0x40|0x08|0x03,_8TCR1);
return 0;
}
......
......@@ -91,6 +91,8 @@ const static struct irq_pins irq_assign_table1[16]={
{H8300_GPIO_P2,H8300_GPIO_B6},{H8300_GPIO_P2,H8300_GPIO_B7},
};
static int use_kmalloc;
extern unsigned long *interrupt_redirect_table;
static inline unsigned long *get_vector_address(void)
......@@ -159,22 +161,6 @@ void __init init_IRQ(void)
#endif
}
/* special request_irq */
/* used bootmem allocater */
void __init request_irq_boot(unsigned int irq,
irqreturn_t (*handler)(int, void *, struct pt_regs *),
unsigned long flags, const char *devname, void *dev_id)
{
irq_handler_t *irq_handle;
irq_handle = alloc_bootmem(sizeof(irq_handler_t));
irq_handle->handler = handler;
irq_handle->flags = flags;
irq_handle->count = 0;
irq_handle->dev_id = dev_id;
irq_handle->devname = devname;
irq_list[irq] = irq_handle;
}
int request_irq(unsigned int irq,
irqreturn_t (*handler)(int, void *, struct pt_regs *),
unsigned long flags, const char *devname, void *dev_id)
......@@ -202,7 +188,14 @@ int request_irq(unsigned int irq,
H8300_GPIO_DDR(port_no, bit_no, H8300_GPIO_INPUT);
*(volatile unsigned short *)ISR &= ~ptn; /* ISR clear */
}
irq_handle = (irq_handler_t *)kmalloc(sizeof(irq_handler_t), GFP_ATOMIC);
if (use_kmalloc)
irq_handle = (irq_handler_t *)kmalloc(sizeof(irq_handler_t), GFP_ATOMIC);
else {
irq_handle = alloc_bootmem(sizeof(irq_handler_t));
(unsigned long)irq_handle |= 0x80000000; /* bootmem allocater */
}
if (irq_handle == NULL)
return -ENOMEM;
......@@ -243,8 +236,10 @@ void free_irq(unsigned int irq, void *dev_id)
}
H8300_GPIO_FREE(port_no, bit_no);
}
kfree(irq_list[irq]);
irq_list[irq] = NULL;
if (((unsigned long)irq_list[irq] & 0x80000000) == 0) {
kfree(irq_list[irq]);
irq_list[irq] = NULL;
}
}
unsigned long probe_irq_on (void)
......@@ -306,3 +301,10 @@ int show_interrupts(struct seq_file *p, void *v)
void init_irq_proc(void)
{
}
static int __init enable_kmalloc(void)
{
use_kmalloc = 1;
return 0;
}
__initcall(enable_kmalloc);
......@@ -27,6 +27,7 @@
#include <linux/config.h>
#include <linux/acpi.h>
#include <asm/pgalloc.h>
#include <asm/io_apic.h>
#include <asm/apic.h>
#include <asm/io.h>
#include <asm/mpspec.h>
......@@ -42,6 +43,9 @@
extern int acpi_disabled;
extern int acpi_ht;
int acpi_lapic = 0;
int acpi_ioapic = 0;
/* --------------------------------------------------------------------------
Boot-time Configuration
-------------------------------------------------------------------------- */
......@@ -91,8 +95,6 @@ char *__acpi_map_table(unsigned long phys, unsigned long size)
#ifdef CONFIG_X86_LOCAL_APIC
int acpi_lapic;
static u64 acpi_lapic_addr __initdata = APIC_DEFAULT_PHYS_BASE;
......@@ -159,8 +161,6 @@ acpi_parse_lapic_addr_ovr (
return 0;
}
#ifdef CONFIG_ACPI
static int __init
acpi_parse_lapic_nmi (
acpi_table_entry_header *header)
......@@ -179,15 +179,11 @@ acpi_parse_lapic_nmi (
return 0;
}
#endif /*CONFIG_ACPI*/
#endif /*CONFIG_X86_LOCAL_APIC*/
#ifdef CONFIG_X86_IO_APIC
int acpi_ioapic;
#ifdef CONFIG_ACPI
static int __init
acpi_parse_ioapic (
......@@ -249,7 +245,6 @@ acpi_parse_nmi_src (
return 0;
}
#endif /*CONFIG_ACPI*/
#endif /*CONFIG_X86_IO_APIC*/
......@@ -332,14 +327,12 @@ acpi_boot_init (void)
if (result)
return result;
#ifdef CONFIG_ACPI
result = acpi_blacklisted();
if (result) {
printk(KERN_WARNING PREFIX "BIOS listed in blacklist, disabling ACPI support\n");
acpi_disabled = 1;
return result;
}
#endif
#ifdef CONFIG_X86_LOCAL_APIC
......@@ -390,21 +383,18 @@ acpi_boot_init (void)
return result;
}
#ifdef CONFIG_ACPI
result = acpi_table_parse_madt(ACPI_MADT_LAPIC_NMI, acpi_parse_lapic_nmi);
if (result < 0) {
printk(KERN_ERR PREFIX "Error parsing LAPIC NMI entry\n");
/* TBD: Cleanup to allow fallback to MPS */
return result;
}
#endif /*CONFIG_ACPI*/
acpi_lapic = 1;
#endif /*CONFIG_X86_LOCAL_APIC*/
#ifdef CONFIG_X86_IO_APIC
#ifdef CONFIG_ACPI
/*
* I/O APIC
......@@ -424,7 +414,7 @@ acpi_boot_init (void)
/*
* if "noapic" boot option, don't look for IO-APICs
*/
if (skip_ioapic_setup) {
if (ioapic_setup_disabled()) {
printk(KERN_INFO PREFIX "Skipping IOAPIC probe "
"due to 'noapic' option.\n");
return 1;
......@@ -460,8 +450,6 @@ acpi_boot_init (void)
acpi_irq_model = ACPI_IRQ_MODEL_IOAPIC;
acpi_ioapic = 1;
#endif /*CONFIG_ACPI*/
#endif /*CONFIG_X86_IO_APIC*/
#ifdef CONFIG_X86_LOCAL_APIC
......
......@@ -1198,7 +1198,7 @@ static int suspend(int vetoable)
printk(KERN_CRIT "apm: suspend was vetoed, but suspending anyway.\n");
}
device_suspend(3, SUSPEND_POWER_DOWN);
device_suspend(3);
/* serialize with the timer interrupt */
write_seqlock_irq(&xtime_lock);
......@@ -1232,7 +1232,7 @@ static int suspend(int vetoable)
if (err != APM_SUCCESS)
apm_error("suspend", err);
err = (err == APM_SUCCESS) ? 0 : -EIO;
device_resume(RESUME_POWER_ON);
device_resume();
pm_send_all(PM_RESUME, (void *)0);
queue_event(APM_NORMAL_RESUME, NULL);
out:
......@@ -1346,7 +1346,7 @@ static void check_events(void)
write_seqlock_irq(&xtime_lock);
set_time();
write_sequnlock_irq(&xtime_lock);
device_resume(RESUME_POWER_ON);
device_resume();
pm_send_all(PM_RESUME, (void *)0);
queue_event(event, NULL);
}
......
......@@ -237,9 +237,12 @@ static void __init init_intel(struct cpuinfo_x86 *c)
c->x86_cache_size = l2 ? l2 : (l1i+l1d);
}
/* SEP CPUID bug: Pentium Pro reports SEP but doesn't have it */
if ( c->x86 == 6 && c->x86_model < 3 && c->x86_mask < 3 )
clear_bit(X86_FEATURE_SEP, c->x86_capability);
/* SEP CPUID bug: Pentium Pro reports SEP but doesn't have it until model 3 mask 3 */
if ( c->x86 == 6) {
unsigned model_mask = (c->x86_model << 8) + c->x86_mask;
if (model_mask < 0x0303)
clear_bit(X86_FEATURE_SEP, c->x86_capability);
}
/* Names for the Pentium II/Celeron processors
detectable only by also checking the cache size.
......
......@@ -574,7 +574,7 @@ static int mtrr_save(struct sys_device * sysdev, u32 state)
int i;
int size = num_var_ranges * sizeof(struct mtrr_value);
mtrr_state = kmalloc(size,GFP_KERNEL);
mtrr_state = kmalloc(size,GFP_ATOMIC);
if (mtrr_state)
memset(mtrr_state,0,size);
else
......@@ -607,8 +607,8 @@ static int mtrr_restore(struct sys_device * sysdev)
static struct sysdev_driver mtrr_sysdev_driver = {
.save = mtrr_save,
.restore = mtrr_restore,
.suspend = mtrr_save,
.resume = mtrr_restore,
};
......
......@@ -162,24 +162,6 @@ enum
static char *dmi_ident[DMI_STRING_MAX];
#ifdef CONFIG_ACPI_BOOT
/* print some information suitable for a blacklist entry. */
static void dmi_dump_system(void)
{
printk("DMI: BIOS: %.40s, %.40s, %.40s\n",
dmi_ident[DMI_BIOS_VENDOR], dmi_ident[DMI_BIOS_VERSION],
dmi_ident[DMI_BIOS_DATE]);
printk("DMI: System: %.40s, %.40s, %.40s\n",
dmi_ident[DMI_SYS_VENDOR], dmi_ident[DMI_PRODUCT_NAME],
dmi_ident[DMI_PRODUCT_VERSION]);
printk("DMI: Board: %.40s, %.40s, %.40s\n",
dmi_ident[DMI_BOARD_VENDOR], dmi_ident[DMI_BOARD_NAME],
dmi_ident[DMI_BOARD_VERSION]);
}
#endif
/*
* Save a DMI string
*/
......
......@@ -1013,7 +1013,6 @@ void __init mp_config_acpi_legacy_irqs (void)
panic("Max # of irq sources exceeded!\n");
}
}
#endif /* CONFIG_X86_IO_APIC */
#ifdef CONFIG_ACPI
......@@ -1150,5 +1149,5 @@ void __init mp_parse_prt (void)
}
#endif /*CONFIG_ACPI_PCI*/
#endif /* CONFIG_X86_IO_APIC */
#endif /*CONFIG_ACPI_BOOT*/
......@@ -546,9 +546,8 @@ static void __init parse_cmdline_early (char ** cmdline_p)
#ifdef CONFIG_X86_LOCAL_APIC
/* disable IO-APIC */
else if (!memcmp(from, "noapic", 6)) {
skip_ioapic_setup = 1;
}
else if (!memcmp(from, "noapic", 6))
disable_ioapic_setup();
#endif /* CONFIG_X86_LOCAL_APIC */
#endif /* CONFIG_ACPI_BOOT */
......@@ -1006,12 +1005,11 @@ void __init setup_arch(char **cmdline_p)
generic_apic_probe(*cmdline_p);
#endif
#ifdef CONFIG_ACPI_BOOT
/*
* Parse the ACPI tables for possible boot-time SMP configuration.
*/
(void) acpi_boot_init();
#endif
acpi_boot_init();
#ifdef CONFIG_X86_LOCAL_APIC
if (smp_found_config)
get_smp_config();
......
......@@ -12,6 +12,7 @@
#include <linux/smp.h>
#include <linux/oprofile.h>
#include <linux/sysdev.h>
#include <linux/slab.h>
#include <asm/nmi.h>
#include <asm/msr.h>
#include <asm/apic.h>
......@@ -91,24 +92,66 @@ static void nmi_save_registers(struct op_msrs * msrs)
{
unsigned int const nr_ctrs = model->num_counters;
unsigned int const nr_ctrls = model->num_controls;
struct op_msr_group * counters = &msrs->counters;
struct op_msr_group * controls = &msrs->controls;
struct op_msr * counters = msrs->counters;
struct op_msr * controls = msrs->controls;
unsigned int i;
for (i = 0; i < nr_ctrs; ++i) {
rdmsr(counters->addrs[i],
counters->saved[i].low,
counters->saved[i].high);
rdmsr(counters[i].addr,
counters[i].saved.low,
counters[i].saved.high);
}
for (i = 0; i < nr_ctrls; ++i) {
rdmsr(controls->addrs[i],
controls->saved[i].low,
controls->saved[i].high);
rdmsr(controls[i].addr,
controls[i].saved.low,
controls[i].saved.high);
}
}
static void free_msrs(void)
{
int i;
for (i = 0; i < NR_CPUS; ++i) {
kfree(cpu_msrs[i].counters);
cpu_msrs[i].counters = NULL;
kfree(cpu_msrs[i].controls);
cpu_msrs[i].controls = NULL;
}
}
static int allocate_msrs(void)
{
int success = 1;
size_t controls_size = sizeof(struct op_msr) * model->num_controls;
size_t counters_size = sizeof(struct op_msr) * model->num_counters;
int i;
for (i = 0; i < NR_CPUS; ++i) {
if (!cpu_online(i))
continue;
cpu_msrs[i].counters = kmalloc(counters_size, GFP_KERNEL);
if (!cpu_msrs[i].counters) {
success = 0;
break;
}
cpu_msrs[i].controls = kmalloc(controls_size, GFP_KERNEL);
if (!cpu_msrs[i].controls) {
success = 0;
break;
}
}
if (!success)
free_msrs();
return success;
}
static void nmi_cpu_setup(void * dummy)
{
int cpu = smp_processor_id();
......@@ -125,6 +168,9 @@ static void nmi_cpu_setup(void * dummy)
static int nmi_setup(void)
{
if (!allocate_msrs())
return -ENOMEM;
/* We walk a thin line between law and rape here.
* We need to be careful to install our NMI handler
* without actually triggering any NMIs as this will
......@@ -142,20 +188,20 @@ static void nmi_restore_registers(struct op_msrs * msrs)
{
unsigned int const nr_ctrs = model->num_counters;
unsigned int const nr_ctrls = model->num_controls;
struct op_msr_group * counters = &msrs->counters;
struct op_msr_group * controls = &msrs->controls;
struct op_msr * counters = msrs->counters;
struct op_msr * controls = msrs->controls;
unsigned int i;
for (i = 0; i < nr_ctrls; ++i) {
wrmsr(controls->addrs[i],
controls->saved[i].low,
controls->saved[i].high);
wrmsr(controls[i].addr,
controls[i].saved.low,
controls[i].saved.high);
}
for (i = 0; i < nr_ctrs; ++i) {
wrmsr(counters->addrs[i],
counters->saved[i].low,
counters->saved[i].high);
wrmsr(counters[i].addr,
counters[i].saved.low,
counters[i].saved.high);
}
}
......@@ -185,6 +231,7 @@ static void nmi_shutdown(void)
on_each_cpu(nmi_cpu_shutdown, NULL, 0, 1);
unset_nmi_callback();
enable_lapic_nmi_watchdog();
free_msrs();
}
......@@ -285,6 +332,9 @@ static int __init ppro_init(void)
{
__u8 cpu_model = current_cpu_data.x86_model;
if (cpu_model > 0xd)
return 0;
if (cpu_model > 5) {
nmi_ops.cpu_type = "i386/piii";
} else if (cpu_model > 2) {
......
......@@ -20,12 +20,12 @@
#define NUM_COUNTERS 4
#define NUM_CONTROLS 4
#define CTR_READ(l,h,msrs,c) do {rdmsr(msrs->counters.addrs[(c)], (l), (h));} while (0)
#define CTR_WRITE(l,msrs,c) do {wrmsr(msrs->counters.addrs[(c)], -(unsigned int)(l), -1);} while (0)
#define CTR_READ(l,h,msrs,c) do {rdmsr(msrs->counters[(c)].addr, (l), (h));} while (0)
#define CTR_WRITE(l,msrs,c) do {wrmsr(msrs->counters[(c)].addr, -(unsigned int)(l), -1);} while (0)
#define CTR_OVERFLOWED(n) (!((n) & (1U<<31)))
#define CTRL_READ(l,h,msrs,c) do {rdmsr(msrs->controls.addrs[(c)], (l), (h));} while (0)
#define CTRL_WRITE(l,h,msrs,c) do {wrmsr(msrs->controls.addrs[(c)], (l), (h));} while (0)
#define CTRL_READ(l,h,msrs,c) do {rdmsr(msrs->controls[(c)].addr, (l), (h));} while (0)
#define CTRL_WRITE(l,h,msrs,c) do {wrmsr(msrs->controls[(c)].addr, (l), (h));} while (0)
#define CTRL_SET_ACTIVE(n) (n |= (1<<22))
#define CTRL_SET_INACTIVE(n) (n &= ~(1<<22))
#define CTRL_CLEAR(x) (x &= (1<<21))
......@@ -39,15 +39,15 @@ static unsigned long reset_value[NUM_COUNTERS];
static void athlon_fill_in_addresses(struct op_msrs * const msrs)
{
msrs->counters.addrs[0] = MSR_K7_PERFCTR0;
msrs->counters.addrs[1] = MSR_K7_PERFCTR1;
msrs->counters.addrs[2] = MSR_K7_PERFCTR2;
msrs->counters.addrs[3] = MSR_K7_PERFCTR3;
msrs->controls.addrs[0] = MSR_K7_EVNTSEL0;
msrs->controls.addrs[1] = MSR_K7_EVNTSEL1;
msrs->controls.addrs[2] = MSR_K7_EVNTSEL2;
msrs->controls.addrs[3] = MSR_K7_EVNTSEL3;
msrs->counters[0].addr = MSR_K7_PERFCTR0;
msrs->counters[1].addr = MSR_K7_PERFCTR1;
msrs->counters[2].addr = MSR_K7_PERFCTR2;
msrs->counters[3].addr = MSR_K7_PERFCTR3;
msrs->controls[0].addr = MSR_K7_EVNTSEL0;
msrs->controls[1].addr = MSR_K7_EVNTSEL1;
msrs->controls[2].addr = MSR_K7_EVNTSEL2;
msrs->controls[3].addr = MSR_K7_EVNTSEL3;
}
......
......@@ -366,8 +366,8 @@ static struct p4_event_binding p4_events[NUM_EVENTS] = {
#define CCCR_SET_PMI_OVF_1(cccr) ((cccr) |= (1<<27))
#define CCCR_SET_ENABLE(cccr) ((cccr) |= (1<<12))
#define CCCR_SET_DISABLE(cccr) ((cccr) &= ~(1<<12))
#define CCCR_READ(low, high, i) do {rdmsr (p4_counters[(i)].cccr_address, (low), (high));} while (0)
#define CCCR_WRITE(low, high, i) do {wrmsr (p4_counters[(i)].cccr_address, (low), (high));} while (0)
#define CCCR_READ(low, high, i) do {rdmsr(p4_counters[(i)].cccr_address, (low), (high));} while (0)
#define CCCR_WRITE(low, high, i) do {wrmsr(p4_counters[(i)].cccr_address, (low), (high));} while (0)
#define CCCR_OVF_P(cccr) ((cccr) & (1U<<31))
#define CCCR_CLEAR_OVF(cccr) ((cccr) &= (~(1U<<31)))
......@@ -410,7 +410,7 @@ static void p4_fill_in_addresses(struct op_msrs * const msrs)
/* the counter registers we pay attention to */
for (i = 0; i < num_counters; ++i) {
msrs->counters.addrs[i] =
msrs->counters[i].addr =
p4_counters[VIRT_CTR(stag, i)].counter_address;
}
......@@ -419,42 +419,42 @@ static void p4_fill_in_addresses(struct op_msrs * const msrs)
/* 18 CCCR registers */
for (i = 0, addr = MSR_P4_BPU_CCCR0 + stag;
addr <= MSR_P4_IQ_CCCR5; ++i, addr += addr_increment()) {
msrs->controls.addrs[i] = addr;
msrs->controls[i].addr = addr;
}
/* 43 ESCR registers in three discontiguous group */
for (addr = MSR_P4_BSU_ESCR0 + stag;
addr <= MSR_P4_SSU_ESCR0; ++i, addr += addr_increment()) {
msrs->controls.addrs[i] = addr;
msrs->controls[i].addr = addr;
}
for (addr = MSR_P4_MS_ESCR0 + stag;
addr <= MSR_P4_TC_ESCR1; ++i, addr += addr_increment()) {
msrs->controls.addrs[i] = addr;
msrs->controls[i].addr = addr;
}
for (addr = MSR_P4_IX_ESCR0 + stag;
addr <= MSR_P4_CRU_ESCR3; ++i, addr += addr_increment()) {
msrs->controls.addrs[i] = addr;
msrs->controls[i].addr = addr;
}
/* there are 2 remaining non-contiguously located ESCRs */
if (num_counters == NUM_COUNTERS_NON_HT) {
/* standard non-HT CPUs handle both remaining ESCRs*/
msrs->controls.addrs[i++] = MSR_P4_CRU_ESCR5;
msrs->controls.addrs[i++] = MSR_P4_CRU_ESCR4;
msrs->controls[i++].addr = MSR_P4_CRU_ESCR5;
msrs->controls[i++].addr = MSR_P4_CRU_ESCR4;
} else if (stag == 0) {
/* HT CPUs give the first remainder to the even thread, as
the 32nd control register */
msrs->controls.addrs[i++] = MSR_P4_CRU_ESCR4;
msrs->controls[i++].addr = MSR_P4_CRU_ESCR4;
} else {
/* and two copies of the second to the odd thread,
for the 22st and 23nd control registers */
msrs->controls.addrs[i++] = MSR_P4_CRU_ESCR5;
msrs->controls.addrs[i++] = MSR_P4_CRU_ESCR5;
msrs->controls[i++].addr = MSR_P4_CRU_ESCR5;
msrs->controls[i++].addr = MSR_P4_CRU_ESCR5;
}
}
......
......@@ -20,12 +20,12 @@
#define NUM_COUNTERS 2
#define NUM_CONTROLS 2
#define CTR_READ(l,h,msrs,c) do {rdmsr(msrs->counters.addrs[(c)], (l), (h));} while (0)
#define CTR_WRITE(l,msrs,c) do {wrmsr(msrs->counters.addrs[(c)], -(u32)(l), -1);} while (0)
#define CTR_READ(l,h,msrs,c) do {rdmsr(msrs->counters[(c)].addr, (l), (h));} while (0)
#define CTR_WRITE(l,msrs,c) do {wrmsr(msrs->counters[(c)].addr, -(u32)(l), -1);} while (0)
#define CTR_OVERFLOWED(n) (!((n) & (1U<<31)))
#define CTRL_READ(l,h,msrs,c) do {rdmsr((msrs->controls.addrs[(c)]), (l), (h));} while (0)
#define CTRL_WRITE(l,h,msrs,c) do {wrmsr((msrs->controls.addrs[(c)]), (l), (h));} while (0)
#define CTRL_READ(l,h,msrs,c) do {rdmsr((msrs->controls[(c)].addr), (l), (h));} while (0)
#define CTRL_WRITE(l,h,msrs,c) do {wrmsr((msrs->controls[(c)].addr), (l), (h));} while (0)
#define CTRL_SET_ACTIVE(n) (n |= (1<<22))
#define CTRL_SET_INACTIVE(n) (n &= ~(1<<22))
#define CTRL_CLEAR(x) (x &= (1<<21))
......@@ -39,11 +39,11 @@ static unsigned long reset_value[NUM_COUNTERS];
static void ppro_fill_in_addresses(struct op_msrs * const msrs)
{
msrs->counters.addrs[0] = MSR_P6_PERFCTR0;
msrs->counters.addrs[1] = MSR_P6_PERFCTR1;
msrs->counters[0].addr = MSR_P6_PERFCTR0;
msrs->counters[1].addr = MSR_P6_PERFCTR1;
msrs->controls.addrs[0] = MSR_P6_EVNTSEL0;
msrs->controls.addrs[1] = MSR_P6_EVNTSEL1;
msrs->controls[0].addr = MSR_P6_EVNTSEL0;
msrs->controls[1].addr = MSR_P6_EVNTSEL1;
}
......
......@@ -11,22 +11,19 @@
#ifndef OP_X86_MODEL_H
#define OP_X86_MODEL_H
/* Pentium IV needs all these */
#define MAX_MSR 63
struct op_saved_msr {
unsigned int high;
unsigned int low;
};
struct op_msr_group {
unsigned int addrs[MAX_MSR];
struct op_saved_msr saved[MAX_MSR];
struct op_msr {
unsigned long addr;
struct op_saved_msr saved;
};
struct op_msrs {
struct op_msr_group counters;
struct op_msr_group controls;
struct op_msr * counters;
struct op_msr * controls;
};
struct pt_regs;
......
......@@ -66,7 +66,7 @@ drivers-$(CONFIG_IA64_HP_SIM) += arch/ia64/hp/sim/
drivers-$(CONFIG_IA64_HP_ZX1) += arch/ia64/hp/common/ arch/ia64/hp/zx1/
drivers-$(CONFIG_IA64_GENERIC) += arch/ia64/hp/common/ arch/ia64/hp/zx1/ arch/ia64/hp/sim/
boot := arch/ia64/boot
boot := arch/ia64/hp/sim/boot
.PHONY: boot compressed check
......
......@@ -7,7 +7,7 @@
# Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com)
#
obj-y := hpsim_irq.o hpsim_setup.o
obj-y := hpsim_irq.o hpsim_setup.o hpsim.o
obj-$(CONFIG_IA64_GENERIC) += hpsim_machvec.o
obj-$(CONFIG_HP_SIMETH) += simeth.o
......
......@@ -5,7 +5,7 @@
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1998 by David Mosberger-Tang <davidm@hpl.hp.com>
# Copyright (C) 1998, 2003 by David Mosberger-Tang <davidm@hpl.hp.com>
#
targets-$(CONFIG_IA64_HP_SIM) += bootloader
......@@ -32,6 +32,6 @@ $(obj)/vmlinux.bin: vmlinux FORCE
LDFLAGS_bootloader = -static -T
$(obj)/bootloader: $(src)/bootloader.lds $(obj)/bootloader.o \
$(obj)/bootloader: $(src)/bootloader.lds $(obj)/bootloader.o $(obj)/boot_head.o $(obj)/fw-emu.o \
lib/lib.a arch/ia64/lib/lib.a FORCE
$(call if_changed,ld)
/*
* Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/asmmacro.h>
.bss
.align 16
stack_mem:
.skip 16834
.text
/* This needs to be defined because lib/string.c:strlcat() calls it in case of error... */
GLOBAL_ENTRY(printk)
break 0
END(printk)
GLOBAL_ENTRY(_start)
.prologue
.save rp, r0
.body
movl gp = __gp
movl sp = stack_mem
bsw.1
br.call.sptk.many rp=start_bootloader
END(_start)
GLOBAL_ENTRY(ssc)
.regstk 5,0,0,0
mov r15=in4
break 0x80001
br.ret.sptk.many b0
END(ssc)
GLOBAL_ENTRY(jmp_to_kernel)
.regstk 2,0,0,0
mov r28=in0
mov b7=in1
br.sptk.few b7
END(jmp_to_kernel)
GLOBAL_ENTRY(pal_emulator_static)
mov r8=-1
mov r9=256
;;
cmp.gtu p6,p7=r9,r28 /* r28 <= 255? */
(p6) br.cond.sptk.few static
;;
mov r9=512
;;
cmp.gtu p6,p7=r9,r28
(p6) br.cond.sptk.few stacked
;;
static: cmp.eq p6,p7=6,r28 /* PAL_PTCE_INFO */
(p7) br.cond.sptk.few 1f
;;
mov r8=0 /* status = 0 */
movl r9=0x100000000 /* tc.base */
movl r10=0x0000000200000003 /* count[0], count[1] */
movl r11=0x1000000000002000 /* stride[0], stride[1] */
br.cond.sptk.few rp
1: cmp.eq p6,p7=14,r28 /* PAL_FREQ_RATIOS */
(p7) br.cond.sptk.few 1f
mov r8=0 /* status = 0 */
movl r9 =0x100000064 /* proc_ratio (1/100) */
movl r10=0x100000100 /* bus_ratio<<32 (1/256) */
movl r11=0x100000064 /* itc_ratio<<32 (1/100) */
;;
1: cmp.eq p6,p7=19,r28 /* PAL_RSE_INFO */
(p7) br.cond.sptk.few 1f
mov r8=0 /* status = 0 */
mov r9=96 /* num phys stacked */
mov r10=0 /* hints */
mov r11=0
br.cond.sptk.few rp
1: cmp.eq p6,p7=1,r28 /* PAL_CACHE_FLUSH */
(p7) br.cond.sptk.few 1f
mov r9=ar.lc
movl r8=524288 /* flush 512k million cache lines (16MB) */
;;
mov ar.lc=r8
movl r8=0xe000000000000000
;;
.loop: fc r8
add r8=32,r8
br.cloop.sptk.few .loop
sync.i
;;
srlz.i
;;
mov ar.lc=r9
mov r8=r0
;;
1: cmp.eq p6,p7=15,r28 /* PAL_PERF_MON_INFO */
(p7) br.cond.sptk.few 1f
mov r8=0 /* status = 0 */
movl r9 =0x12082004 /* generic=4 width=32 retired=8 cycles=18 */
mov r10=0 /* reserved */
mov r11=0 /* reserved */
mov r16=0xffff /* implemented PMC */
mov r17=0xffff /* implemented PMD */
add r18=8,r29 /* second index */
;;
st8 [r29]=r16,16 /* store implemented PMC */
st8 [r18]=r0,16 /* clear remaining bits */
;;
st8 [r29]=r0,16 /* store implemented PMC */
st8 [r18]=r0,16 /* clear remaining bits */
;;
st8 [r29]=r17,16 /* store implemented PMD */
st8 [r18]=r0,16 /* clear remaining bits */
mov r16=0xf0 /* cycles count capable PMC */
;;
st8 [r29]=r0,16 /* store implemented PMC */
st8 [r18]=r0,16 /* clear remaining bits */
mov r17=0x10 /* retired bundles capable PMC */
;;
st8 [r29]=r16,16 /* store cycles capable */
st8 [r18]=r0,16 /* clear remaining bits */
;;
st8 [r29]=r0,16 /* store implemented PMC */
st8 [r18]=r0,16 /* clear remaining bits */
;;
st8 [r29]=r17,16 /* store retired bundle capable */
st8 [r18]=r0,16 /* clear remaining bits */
;;
st8 [r29]=r0,16 /* store implemented PMC */
st8 [r18]=r0,16 /* clear remaining bits */
;;
1: br.cond.sptk.few rp
stacked:
br.ret.sptk.few rp
END(pal_emulator_static)
/*
* arch/ia64/boot/bootloader.c
* arch/ia64/hp/sim/boot/bootloader.c
*
* Loads an ELF kernel.
*
* Copyright (C) 1998-2002 Hewlett-Packard Co
* Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Stephane Eranian <eranian@hpl.hp.com>
*
......@@ -17,31 +17,13 @@ struct task_struct; /* forward declaration for elf.h */
#include <linux/kernel.h>
#include <asm/elf.h>
#include <asm/intrinsics.h>
#include <asm/pal.h>
#include <asm/pgtable.h>
#include <asm/sal.h>
#include <asm/system.h>
/* Simulator system calls: */
#define SSC_CONSOLE_INIT 20
#define SSC_GETCHAR 21
#define SSC_PUTCHAR 31
#define SSC_OPEN 50
#define SSC_CLOSE 51
#define SSC_READ 52
#define SSC_WRITE 53
#define SSC_GET_COMPLETION 54
#define SSC_WAIT_COMPLETION 55
#define SSC_CONNECT_INTERRUPT 58
#define SSC_GENERATE_INTERRUPT 59
#define SSC_SET_PERIODIC_INTERRUPT 60
#define SSC_GET_RTC 65
#define SSC_EXIT 66
#define SSC_LOAD_SYMBOLS 69
#define SSC_GET_TOD 74
#define SSC_GET_ARGS 75
#include "ssc.h"
struct disk_req {
unsigned long addr;
......@@ -53,10 +35,8 @@ struct disk_stat {
unsigned count;
};
#include "../kernel/fw-emu.c"
/* This needs to be defined because lib/string.c:strlcat() calls it in case of error... */
asm (".global printk; printk = 0");
extern void jmp_to_kernel (unsigned long bp, unsigned long e_entry);
extern struct ia64_boot_param *sys_fw_init (const char *args, int arglen);
/*
* Set a break point on this function so that symbols are available to set breakpoints in
......@@ -82,9 +62,8 @@ cons_write (const char *buf)
#define MAX_ARGS 32
void
_start (void)
start_bootloader (void)
{
static char stack[16384] __attribute__ ((aligned (16)));
static char mem[4096];
static char buffer[1024];
unsigned long off;
......@@ -98,10 +77,6 @@ _start (void)
char *kpath, *args;
long arglen = 0;
asm volatile ("movl gp=__gp;;" ::: "memory");
asm volatile ("mov sp=%0" :: "r"(stack) : "memory");
asm volatile ("bsw.1;;");
ssc(0, 0, 0, 0, SSC_CONSOLE_INIT);
/*
......@@ -195,15 +170,14 @@ _start (void)
cons_write("starting kernel...\n");
/* fake an I/O base address: */
asm volatile ("mov ar.k0=%0" :: "r"(0xffffc000000UL));
ia64_setreg(_IA64_REG_AR_KR0, 0xffffc000000UL);
bp = sys_fw_init(args, arglen);
ssc(0, (long) kpath, 0, 0, SSC_LOAD_SYMBOLS);
debug_break();
asm volatile ("mov sp=%2; mov r28=%1; br.sptk.few %0"
:: "b"(e_entry), "r"(bp), "r"(__pa(&stack)));
jmp_to_kernel((unsigned long) bp, e_entry);
cons_write("kernel returned!\n");
ssc(-1, 0, 0, 0, SSC_EXIT);
......
......@@ -3,9 +3,6 @@
*
* Copyright (C) 1998-2001 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*
* For the HP simulator, this file gets included in boot/bootloader.c.
* For SoftSDV, this file gets included in sys_softsdv.c.
*/
#include <linux/config.h>
......@@ -18,6 +15,8 @@
#include <asm/pal.h>
#include <asm/sal.h>
#include "ssc.h"
#define MB (1024*1024UL)
#define SIMPLE_MEMMAP 1
......@@ -37,27 +36,6 @@ static char fw_mem[( sizeof(struct ia64_boot_param)
+ NUM_MEM_DESCS*(sizeof(efi_memory_desc_t))
+ 1024)] __attribute__ ((aligned (8)));
#if defined(CONFIG_IA64_HP_SIM) || defined(CONFIG_IA64_GENERIC)
/* Simulator system calls: */
#define SSC_EXIT 66
/*
* Simulator system call.
*/
static long
ssc (long arg0, long arg1, long arg2, long arg3, int nr)
{
register long r8 asm ("r8");
asm volatile ("mov r15=%1\n\t"
"break 0x80001"
: "=r"(r8)
: "r"(nr), "r"(arg0), "r"(arg1), "r"(arg2), "r"(arg3));
return r8;
}
#define SECS_PER_HOUR (60 * 60)
#define SECS_PER_DAY (SECS_PER_HOUR * 24)
......@@ -119,109 +97,8 @@ offtime (unsigned long t, efi_time_t *tp)
return 1;
}
#endif /* CONFIG_IA64_HP_SIM */
/*
* Very ugly, but we need this in the simulator only. Once we run on
* real hw, this can all go away.
*/
extern void pal_emulator_static (void);
asm (
" .proc pal_emulator_static\n"
"pal_emulator_static:"
" mov r8=-1\n"
" mov r9=256\n"
" ;;\n"
" cmp.gtu p6,p7=r9,r28 /* r28 <= 255? */\n"
"(p6) br.cond.sptk.few static\n"
" ;;\n"
" mov r9=512\n"
" ;;\n"
" cmp.gtu p6,p7=r9,r28\n"
"(p6) br.cond.sptk.few stacked\n"
" ;;\n"
"static: cmp.eq p6,p7=6,r28 /* PAL_PTCE_INFO */\n"
"(p7) br.cond.sptk.few 1f\n"
" ;;\n"
" mov r8=0 /* status = 0 */\n"
" movl r9=0x100000000 /* tc.base */\n"
" movl r10=0x0000000200000003 /* count[0], count[1] */\n"
" movl r11=0x1000000000002000 /* stride[0], stride[1] */\n"
" br.cond.sptk.few rp\n"
"1: cmp.eq p6,p7=14,r28 /* PAL_FREQ_RATIOS */\n"
"(p7) br.cond.sptk.few 1f\n"
" mov r8=0 /* status = 0 */\n"
" movl r9 =0x100000064 /* proc_ratio (1/100) */\n"
" movl r10=0x100000100 /* bus_ratio<<32 (1/256) */\n"
" movl r11=0x100000064 /* itc_ratio<<32 (1/100) */\n"
" ;;\n"
"1: cmp.eq p6,p7=19,r28 /* PAL_RSE_INFO */\n"
"(p7) br.cond.sptk.few 1f\n"
" mov r8=0 /* status = 0 */\n"
" mov r9=96 /* num phys stacked */\n"
" mov r10=0 /* hints */\n"
" mov r11=0\n"
" br.cond.sptk.few rp\n"
"1: cmp.eq p6,p7=1,r28 /* PAL_CACHE_FLUSH */\n"
"(p7) br.cond.sptk.few 1f\n"
" mov r9=ar.lc\n"
" movl r8=524288 /* flush 512k million cache lines (16MB) */\n"
" ;;\n"
" mov ar.lc=r8\n"
" movl r8=0xe000000000000000\n"
" ;;\n"
".loop: fc r8\n"
" add r8=32,r8\n"
" br.cloop.sptk.few .loop\n"
" sync.i\n"
" ;;\n"
" srlz.i\n"
" ;;\n"
" mov ar.lc=r9\n"
" mov r8=r0\n"
" ;;\n"
"1: cmp.eq p6,p7=15,r28 /* PAL_PERF_MON_INFO */\n"
"(p7) br.cond.sptk.few 1f\n"
" mov r8=0 /* status = 0 */\n"
" movl r9 =0x12082004 /* generic=4 width=32 retired=8 cycles=18 */\n"
" mov r10=0 /* reserved */\n"
" mov r11=0 /* reserved */\n"
" mov r16=0xffff /* implemented PMC */\n"
" mov r17=0xffff /* implemented PMD */\n"
" add r18=8,r29 /* second index */\n"
" ;;\n"
" st8 [r29]=r16,16 /* store implemented PMC */\n"
" st8 [r18]=r0,16 /* clear remaining bits */\n"
" ;;\n"
" st8 [r29]=r0,16 /* store implemented PMC */\n"
" st8 [r18]=r0,16 /* clear remaining bits */\n"
" ;;\n"
" st8 [r29]=r17,16 /* store implemented PMD */\n"
" st8 [r18]=r0,16 /* clear remaining bits */\n"
" mov r16=0xf0 /* cycles count capable PMC */\n"
" ;;\n"
" st8 [r29]=r0,16 /* store implemented PMC */\n"
" st8 [r18]=r0,16 /* clear remaining bits */\n"
" mov r17=0x10 /* retired bundles capable PMC */\n"
" ;;\n"
" st8 [r29]=r16,16 /* store cycles capable */\n"
" st8 [r18]=r0,16 /* clear remaining bits */\n"
" ;;\n"
" st8 [r29]=r0,16 /* store implemented PMC */\n"
" st8 [r18]=r0,16 /* clear remaining bits */\n"
" ;;\n"
" st8 [r29]=r17,16 /* store retired bundle capable */\n"
" st8 [r18]=r0,16 /* clear remaining bits */\n"
" ;;\n"
" st8 [r29]=r0,16 /* store implemented PMC */\n"
" st8 [r18]=r0,16 /* clear remaining bits */\n"
" ;;\n"
"1: br.cond.sptk.few rp\n"
"stacked:\n"
" br.ret.sptk.few rp\n"
" .endp pal_emulator_static\n");
/* Macro to emulate SAL call using legacy IN and OUT calls to CF8, CFC etc.. */
#define BUILD_CMD(addr) ((0x80000000 | (addr)) & ~3)
......@@ -268,14 +145,14 @@ efi_unimplemented (void)
return EFI_UNSUPPORTED;
}
static long
static struct sal_ret_values
sal_emulator (long index, unsigned long in1, unsigned long in2,
unsigned long in3, unsigned long in4, unsigned long in5,
unsigned long in6, unsigned long in7)
{
register long r9 asm ("r9") = 0;
register long r10 asm ("r10") = 0;
register long r11 asm ("r11") = 0;
long r9 = 0;
long r10 = 0;
long r11 = 0;
long status;
/*
......@@ -357,8 +234,7 @@ sal_emulator (long index, unsigned long in1, unsigned long in2,
} else {
status = -1;
}
asm volatile ("" :: "r"(r9), "r"(r10), "r"(r11));
return status;
return ((struct sal_ret_values) {status, r9, r10, r11});
}
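The emulator now returns all four SAL result registers by value rather than forcing r9-r11 through register variables. A sketch of the struct layout this assumes (presumably the declaration in <asm/sal.h>):

        struct sal_ret_values {
                long r8;        /* status */
                long r9;
                long r10;
                long r11;
        };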
......@@ -427,7 +303,7 @@ sys_fw_init (const char *args, int arglen)
efi_systab->hdr.headersize = sizeof(efi_systab->hdr);
efi_systab->fw_vendor = __pa("H\0e\0w\0l\0e\0t\0t\0-\0P\0a\0c\0k\0a\0r\0d\0\0");
efi_systab->fw_revision = 1;
efi_systab->runtime = __pa(efi_runtime);
efi_systab->runtime = (void *) __pa(efi_runtime);
efi_systab->nr_tables = 1;
efi_systab->tables = __pa(efi_tables);
......
/*
* Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Stephane Eranian <eranian@hpl.hp.com>
*/
#ifndef ssc_h
#define ssc_h
/* Simulator system calls: */
#define SSC_CONSOLE_INIT 20
#define SSC_GETCHAR 21
#define SSC_PUTCHAR 31
#define SSC_OPEN 50
#define SSC_CLOSE 51
#define SSC_READ 52
#define SSC_WRITE 53
#define SSC_GET_COMPLETION 54
#define SSC_WAIT_COMPLETION 55
#define SSC_CONNECT_INTERRUPT 58
#define SSC_GENERATE_INTERRUPT 59
#define SSC_SET_PERIODIC_INTERRUPT 60
#define SSC_GET_RTC 65
#define SSC_EXIT 66
#define SSC_LOAD_SYMBOLS 69
#define SSC_GET_TOD 74
#define SSC_GET_ARGS 75
/*
* Simulator system call.
*/
extern long ssc (long arg0, long arg1, long arg2, long arg3, int nr);
#endif /* ssc_h */
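As a usage sketch, the ssc() wrapper declared above hands its arguments to the HP simulator via break 0x80001; bootloader console output, for instance, boils down to something like this (sim_putchar is a hypothetical helper, using SSC_PUTCHAR from this header):

        static void
        sim_putchar (int c)
        {
                /* write one character to the simulator console */
                ssc(c, 0, 0, 0, SSC_PUTCHAR);
        }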
#include <asm/asmmacro.h>
/*
* Simulator system call.
*/
GLOBAL_ENTRY(ia64_ssc)
mov r15=r36
break 0x80001
br.ret.sptk.many rp
END(ia64_ssc)
......@@ -25,19 +25,6 @@
#include "hpsim_ssc.h"
/*
* Simulator system call.
*/
asm (".text\n"
".align 32\n"
".global ia64_ssc\n"
".proc ia64_ssc\n"
"ia64_ssc:\n"
"mov r15=r36\n"
"break 0x80001\n"
"br.ret.sptk.many rp\n"
".endp\n");
void
ia64_ssc_connect_irq (long intr, long irq)
{
......
......@@ -24,6 +24,7 @@
#include <linux/wait.h>
#include <linux/compat.h>
#include <asm/intrinsics.h>
#include <asm/uaccess.h>
#include <asm/rse.h>
#include <asm/sigcontext.h>
......@@ -41,6 +42,11 @@
#define __IA32_NR_sigreturn 119
#define __IA32_NR_rt_sigreturn 173
#ifdef ASM_SUPPORTED
/*
* Don't let GCC use f16-f31, so that save_ia32_fpstate_live() and
* restore_ia32_fpstate_live() can be sure the live registers contain user-level state.
*/
register double f16 asm ("f16"); register double f17 asm ("f17");
register double f18 asm ("f18"); register double f19 asm ("f19");
register double f20 asm ("f20"); register double f21 asm ("f21");
......@@ -50,6 +56,7 @@ register double f24 asm ("f24"); register double f25 asm ("f25");
register double f26 asm ("f26"); register double f27 asm ("f27");
register double f28 asm ("f28"); register double f29 asm ("f29");
register double f30 asm ("f30"); register double f31 asm ("f31");
#endif
struct sigframe_ia32
{
......@@ -198,30 +205,6 @@ copy_siginfo_to_user32 (siginfo_t32 *to, siginfo_t *from)
* All other fields unused...
*/
#define __ldfe(regnum, x) \
({ \
register double __f__ asm ("f"#regnum); \
__asm__ __volatile__ ("ldfe %0=[%1] ;;" :"=f"(__f__): "r"(x)); \
})
#define __ldf8(regnum, x) \
({ \
register double __f__ asm ("f"#regnum); \
__asm__ __volatile__ ("ldf8 %0=[%1] ;;" :"=f"(__f__): "r"(x)); \
})
#define __stfe(x, regnum) \
({ \
register double __f__ asm ("f"#regnum); \
__asm__ __volatile__ ("stfe [%0]=%1" :: "r"(x), "f"(__f__) : "memory"); \
})
#define __stf8(x, regnum) \
({ \
register double __f__ asm ("f"#regnum); \
__asm__ __volatile__ ("stf8 [%0]=%1" :: "r"(x), "f"(__f__) : "memory"); \
})
static int
save_ia32_fpstate_live (struct _fpstate_ia32 *save)
{
......@@ -238,18 +221,19 @@ save_ia32_fpstate_live (struct _fpstate_ia32 *save)
if (!access_ok(VERIFY_WRITE, save, sizeof(*save)))
return -EFAULT;
/* Readin fsr, fcr, fir, fdr and copy onto fpstate */
asm volatile ( "mov %0=ar.fsr;" : "=r"(fsr));
asm volatile ( "mov %0=ar.fcr;" : "=r"(fcr));
asm volatile ( "mov %0=ar.fir;" : "=r"(fir));
asm volatile ( "mov %0=ar.fdr;" : "=r"(fdr));
/* Read in fsr, fcr, fir, fdr and copy onto fpstate */
fsr = ia64_getreg(_IA64_REG_AR_FSR);
fcr = ia64_getreg(_IA64_REG_AR_FCR);
fir = ia64_getreg(_IA64_REG_AR_FIR);
fdr = ia64_getreg(_IA64_REG_AR_FDR);
/*
* We need to clear the exception state before calling the signal handler. Clear
* bit 15 and bits 0-7 in the fp status word, similar to the functionality of the
* fnclex instruction.
*/
new_fsr = fsr & ~0x80ff;
asm volatile ( "mov ar.fsr=%0;" :: "r"(new_fsr));
ia64_setreg(_IA64_REG_AR_FSR, new_fsr);
__put_user(fcr & 0xffff, &save->cw);
__put_user(fsr & 0xffff, &save->sw);
......@@ -286,45 +270,45 @@ save_ia32_fpstate_live (struct _fpstate_ia32 *save)
ia64f2ia32f(fpregp, &ptp->f11);
copy_to_user(&save->_st[(3+fr8_st_map)&0x7], fpregp, sizeof(struct _fpreg_ia32));
__stfe(fpregp, 12);
ia64_stfe(fpregp, 12);
copy_to_user(&save->_st[(4+fr8_st_map)&0x7], fpregp, sizeof(struct _fpreg_ia32));
__stfe(fpregp, 13);
ia64_stfe(fpregp, 13);
copy_to_user(&save->_st[(5+fr8_st_map)&0x7], fpregp, sizeof(struct _fpreg_ia32));
__stfe(fpregp, 14);
ia64_stfe(fpregp, 14);
copy_to_user(&save->_st[(6+fr8_st_map)&0x7], fpregp, sizeof(struct _fpreg_ia32));
__stfe(fpregp, 15);
ia64_stfe(fpregp, 15);
copy_to_user(&save->_st[(7+fr8_st_map)&0x7], fpregp, sizeof(struct _fpreg_ia32));
__stf8(&num128[0], 16);
__stf8(&num128[1], 17);
ia64_stf8(&num128[0], 16);
ia64_stf8(&num128[1], 17);
copy_to_user(&save->_xmm[0], num128, sizeof(struct _xmmreg_ia32));
__stf8(&num128[0], 18);
__stf8(&num128[1], 19);
ia64_stf8(&num128[0], 18);
ia64_stf8(&num128[1], 19);
copy_to_user(&save->_xmm[1], num128, sizeof(struct _xmmreg_ia32));
__stf8(&num128[0], 20);
__stf8(&num128[1], 21);
ia64_stf8(&num128[0], 20);
ia64_stf8(&num128[1], 21);
copy_to_user(&save->_xmm[2], num128, sizeof(struct _xmmreg_ia32));
__stf8(&num128[0], 22);
__stf8(&num128[1], 23);
ia64_stf8(&num128[0], 22);
ia64_stf8(&num128[1], 23);
copy_to_user(&save->_xmm[3], num128, sizeof(struct _xmmreg_ia32));
__stf8(&num128[0], 24);
__stf8(&num128[1], 25);
ia64_stf8(&num128[0], 24);
ia64_stf8(&num128[1], 25);
copy_to_user(&save->_xmm[4], num128, sizeof(struct _xmmreg_ia32));
__stf8(&num128[0], 26);
__stf8(&num128[1], 27);
ia64_stf8(&num128[0], 26);
ia64_stf8(&num128[1], 27);
copy_to_user(&save->_xmm[5], num128, sizeof(struct _xmmreg_ia32));
__stf8(&num128[0], 28);
__stf8(&num128[1], 29);
ia64_stf8(&num128[0], 28);
ia64_stf8(&num128[1], 29);
copy_to_user(&save->_xmm[6], num128, sizeof(struct _xmmreg_ia32));
__stf8(&num128[0], 30);
__stf8(&num128[1], 31);
ia64_stf8(&num128[0], 30);
ia64_stf8(&num128[1], 31);
copy_to_user(&save->_xmm[7], num128, sizeof(struct _xmmreg_ia32));
return 0;
}
......@@ -354,10 +338,10 @@ restore_ia32_fpstate_live (struct _fpstate_ia32 *save)
* should remain the same while writing.
* So we do a read, change the specific fields, and write back.
*/
asm volatile ( "mov %0=ar.fsr;" : "=r"(fsr));
asm volatile ( "mov %0=ar.fcr;" : "=r"(fcr));
asm volatile ( "mov %0=ar.fir;" : "=r"(fir));
asm volatile ( "mov %0=ar.fdr;" : "=r"(fdr));
fsr = ia64_getreg(_IA64_REG_AR_FSR);
fcr = ia64_getreg(_IA64_REG_AR_FCR);
fir = ia64_getreg(_IA64_REG_AR_FIR);
fdr = ia64_getreg(_IA64_REG_AR_FDR);
__get_user(mxcsr, (unsigned int *)&save->mxcsr);
/* setting bits 0..5 8..12 with cw and 39..47 from mxcsr */
......@@ -391,10 +375,10 @@ restore_ia32_fpstate_live (struct _fpstate_ia32 *save)
num64 = (num64 << 32) | lo;
fdr = (fdr & (~0xffffffffffff)) | num64;
asm volatile ( "mov ar.fsr=%0;" :: "r"(fsr));
asm volatile ( "mov ar.fcr=%0;" :: "r"(fcr));
asm volatile ( "mov ar.fir=%0;" :: "r"(fir));
asm volatile ( "mov ar.fdr=%0;" :: "r"(fdr));
ia64_setreg(_IA64_REG_AR_FSR, fsr);
ia64_setreg(_IA64_REG_AR_FCR, fcr);
ia64_setreg(_IA64_REG_AR_FIR, fir);
ia64_setreg(_IA64_REG_AR_FDR, fdr);
/*
* restore f8..f11 onto pt_regs
......@@ -420,45 +404,45 @@ restore_ia32_fpstate_live (struct _fpstate_ia32 *save)
ia32f2ia64f(&ptp->f11, fpregp);
copy_from_user(fpregp, &save->_st[(4+fr8_st_map)&0x7], sizeof(struct _fpreg_ia32));
__ldfe(12, fpregp);
ia64_ldfe(12, fpregp);
copy_from_user(fpregp, &save->_st[(5+fr8_st_map)&0x7], sizeof(struct _fpreg_ia32));
__ldfe(13, fpregp);
ia64_ldfe(13, fpregp);
copy_from_user(fpregp, &save->_st[(6+fr8_st_map)&0x7], sizeof(struct _fpreg_ia32));
__ldfe(14, fpregp);
ia64_ldfe(14, fpregp);
copy_from_user(fpregp, &save->_st[(7+fr8_st_map)&0x7], sizeof(struct _fpreg_ia32));
__ldfe(15, fpregp);
ia64_ldfe(15, fpregp);
copy_from_user(num128, &save->_xmm[0], sizeof(struct _xmmreg_ia32));
__ldf8(16, &num128[0]);
__ldf8(17, &num128[1]);
ia64_ldf8(16, &num128[0]);
ia64_ldf8(17, &num128[1]);
copy_from_user(num128, &save->_xmm[1], sizeof(struct _xmmreg_ia32));
__ldf8(18, &num128[0]);
__ldf8(19, &num128[1]);
ia64_ldf8(18, &num128[0]);
ia64_ldf8(19, &num128[1]);
copy_from_user(num128, &save->_xmm[2], sizeof(struct _xmmreg_ia32));
__ldf8(20, &num128[0]);
__ldf8(21, &num128[1]);
ia64_ldf8(20, &num128[0]);
ia64_ldf8(21, &num128[1]);
copy_from_user(num128, &save->_xmm[3], sizeof(struct _xmmreg_ia32));
__ldf8(22, &num128[0]);
__ldf8(23, &num128[1]);
ia64_ldf8(22, &num128[0]);
ia64_ldf8(23, &num128[1]);
copy_from_user(num128, &save->_xmm[4], sizeof(struct _xmmreg_ia32));
__ldf8(24, &num128[0]);
__ldf8(25, &num128[1]);
ia64_ldf8(24, &num128[0]);
ia64_ldf8(25, &num128[1]);
copy_from_user(num128, &save->_xmm[5], sizeof(struct _xmmreg_ia32));
__ldf8(26, &num128[0]);
__ldf8(27, &num128[1]);
ia64_ldf8(26, &num128[0]);
ia64_ldf8(27, &num128[1]);
copy_from_user(num128, &save->_xmm[6], sizeof(struct _xmmreg_ia32));
__ldf8(28, &num128[0]);
__ldf8(29, &num128[1]);
ia64_ldf8(28, &num128[0]);
ia64_ldf8(29, &num128[1]);
copy_from_user(num128, &save->_xmm[7], sizeof(struct _xmmreg_ia32));
__ldf8(30, &num128[0]);
__ldf8(31, &num128[1]);
ia64_ldf8(30, &num128[0]);
ia64_ldf8(31, &num128[1]);
return 0;
}
......@@ -705,7 +689,7 @@ setup_sigcontext_ia32 (struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate
/*
* `eflags' is in an ar register for this context
*/
asm volatile ("mov %0=ar.eflag ;;" : "=r"(flag));
flag = ia64_getreg(_IA64_REG_AR_EFLAG);
err |= __put_user((unsigned int)flag, &sc->eflags);
err |= __put_user(regs->r12, &sc->esp_at_signal);
err |= __put_user((regs->r17 >> 16) & 0xffff, (unsigned int *)&sc->ss);
......@@ -790,10 +774,10 @@ restore_sigcontext_ia32 (struct pt_regs *regs, struct sigcontext_ia32 *sc, int *
* IA32 process's context.
*/
err |= __get_user(tmpflags, &sc->eflags);
asm volatile ("mov %0=ar.eflag ;;" : "=r"(flag));
flag = ia64_getreg(_IA64_REG_AR_EFLAG);
flag &= ~0x40DD5;
flag |= (tmpflags & 0x40DD5);
asm volatile ("mov ar.eflag=%0 ;;" :: "r"(flag));
ia64_setreg(_IA64_REG_AR_EFLAG, flag);
regs->r1 = -1; /* disable syscall checks, r1 is orig_eax */
}
......
......@@ -18,6 +18,7 @@
#include <linux/personality.h>
#include <linux/sched.h>
#include <asm/intrinsics.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/system.h>
......@@ -68,19 +69,11 @@ ia32_load_segment_descriptors (struct task_struct *task)
void
ia32_save_state (struct task_struct *t)
{
unsigned long eflag, fsr, fcr, fir, fdr;
asm ("mov %0=ar.eflag;"
"mov %1=ar.fsr;"
"mov %2=ar.fcr;"
"mov %3=ar.fir;"
"mov %4=ar.fdr;"
: "=r"(eflag), "=r"(fsr), "=r"(fcr), "=r"(fir), "=r"(fdr));
t->thread.eflag = eflag;
t->thread.fsr = fsr;
t->thread.fcr = fcr;
t->thread.fir = fir;
t->thread.fdr = fdr;
t->thread.eflag = ia64_getreg(_IA64_REG_AR_EFLAG);
t->thread.fsr = ia64_getreg(_IA64_REG_AR_FSR);
t->thread.fcr = ia64_getreg(_IA64_REG_AR_FCR);
t->thread.fir = ia64_getreg(_IA64_REG_AR_FIR);
t->thread.fdr = ia64_getreg(_IA64_REG_AR_FDR);
ia64_set_kr(IA64_KR_IO_BASE, t->thread.old_iob);
ia64_set_kr(IA64_KR_TSSD, t->thread.old_k1);
}
......@@ -99,12 +92,11 @@ ia32_load_state (struct task_struct *t)
fdr = t->thread.fdr;
tssd = load_desc(_TSS(nr)); /* TSSD */
asm volatile ("mov ar.eflag=%0;"
"mov ar.fsr=%1;"
"mov ar.fcr=%2;"
"mov ar.fir=%3;"
"mov ar.fdr=%4;"
:: "r"(eflag), "r"(fsr), "r"(fcr), "r"(fir), "r"(fdr));
ia64_setreg(_IA64_REG_AR_EFLAG, eflag);
ia64_setreg(_IA64_REG_AR_FSR, fsr);
ia64_setreg(_IA64_REG_AR_FCR, fcr);
ia64_setreg(_IA64_REG_AR_FIR, fir);
ia64_setreg(_IA64_REG_AR_FDR, fdr);
current->thread.old_iob = ia64_get_kr(IA64_KR_IO_BASE);
current->thread.old_k1 = ia64_get_kr(IA64_KR_TSSD);
ia64_set_kr(IA64_KR_IO_BASE, IA32_IOBASE);
......@@ -178,7 +170,7 @@ void
ia32_cpu_init (void)
{
/* initialize global ia32 state - CR0 and CR4 */
asm volatile ("mov ar.cflg = %0" :: "r" (((ulong) IA32_CR4 << 32) | IA32_CR0));
ia64_setreg(_IA64_REG_AR_CFLAG, (((ulong) IA32_CR4 << 32) | IA32_CR0));
}
static int __init
......
......@@ -14,6 +14,7 @@
#include "ia32priv.h"
#include <asm/intrinsics.h>
#include <asm/ptrace.h>
int
......@@ -93,9 +94,8 @@ ia32_exception (struct pt_regs *regs, unsigned long isr)
{
unsigned long fsr, fcr;
asm ("mov %0=ar.fsr;"
"mov %1=ar.fcr;"
: "=r"(fsr), "=r"(fcr));
fsr = ia64_getreg(_IA64_REG_AR_FSR);
fcr = ia64_getreg(_IA64_REG_AR_FCR);
siginfo.si_signo = SIGFPE;
/*
......
......@@ -445,17 +445,19 @@ extern int ia32_setup_arg_pages (struct linux_binprm *bprm);
extern unsigned long ia32_do_mmap (struct file *, unsigned long, unsigned long, int, int, loff_t);
extern void ia32_load_segment_descriptors (struct task_struct *task);
#define ia32f2ia64f(dst,src) \
do { \
register double f6 asm ("f6"); \
asm volatile ("ldfe f6=[%2];; stf.spill [%1]=f6" : "=f"(f6): "r"(dst), "r"(src) : "memory"); \
} while(0)
#define ia64f2ia32f(dst,src) \
do { \
register double f6 asm ("f6"); \
asm volatile ("ldf.fill f6=[%2];; stfe [%1]=f6" : "=f"(f6): "r"(dst), "r"(src) : "memory"); \
} while(0)
#define ia32f2ia64f(dst,src) \
do { \
ia64_ldfe(6,src); \
ia64_stop(); \
ia64_stf_spill(dst, 6); \
} while(0)
#define ia64f2ia32f(dst,src) \
do { \
ia64_ldf_fill(6, src); \
ia64_stop(); \
ia64_stfe(dst, 6); \
} while(0)
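For reference, these macros bounce a value through f6 (a load in one format, ia64_stop() to close the instruction group, then a store in the other format). The conversion paths earlier in this diff use them as in this fragment from save_ia32_fpstate_live():

        /* convert f11 (saved in pt_regs) to the ia32 80-bit layout and copy it out */
        ia64f2ia32f(fpregp, &ptp->f11);
        copy_to_user(&save->_st[(3+fr8_st_map)&0x7], fpregp, sizeof(struct _fpreg_ia32));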
struct user_regs_struct32 {
__u32 ebx, ecx, edx, esi, edi, ebp, eax;
......@@ -468,11 +470,8 @@ struct user_regs_struct32 {
};
/* Prototypes for use in elfcore32.h */
int save_ia32_fpstate (struct task_struct *tsk,
struct ia32_user_i387_struct *save);
int save_ia32_fpxstate (struct task_struct *tsk,
struct ia32_user_fxsr_struct *save);
extern int save_ia32_fpstate (struct task_struct *tsk, struct ia32_user_i387_struct *save);
extern int save_ia32_fpxstate (struct task_struct *tsk, struct ia32_user_fxsr_struct *save);
#endif /* !CONFIG_IA32_SUPPORT */
......
......@@ -51,9 +51,10 @@
#include <linux/compat.h>
#include <linux/vfs.h>
#include <asm/intrinsics.h>
#include <asm/semaphore.h>
#include <asm/types.h>
#include <asm/uaccess.h>
#include <asm/semaphore.h>
#include "ia32priv.h"
......@@ -2192,7 +2193,7 @@ sys32_iopl (int level)
if (level != 3)
return(-EINVAL);
/* Trying to gain more privileges? */
asm volatile ("mov %0=ar.eflag ;;" : "=r"(old));
old = ia64_getreg(_IA64_REG_AR_EFLAG);
if ((unsigned int) level > ((old >> 12) & 3)) {
if (!capable(CAP_SYS_RAWIO))
return -EPERM;
......@@ -2216,7 +2217,7 @@ sys32_iopl (int level)
if (addr >= 0) {
old = (old & ~0x3000) | (level << 12);
asm volatile ("mov ar.eflag=%0;;" :: "r"(old));
ia64_setreg(_IA64_REG_AR_EFLAG, old);
}
fput(file);
......
......@@ -471,6 +471,18 @@ GLOBAL_ENTRY(__ia64_syscall)
br.ret.sptk.many rp
END(__ia64_syscall)
GLOBAL_ENTRY(execve)
mov r15=__NR_execve // put syscall number in place
break __BREAK_SYSCALL
br.ret.sptk.many rp
END(execve)
GLOBAL_ENTRY(clone)
mov r15=__NR_clone // put syscall number in place
break __BREAK_SYSCALL
br.ret.sptk.many rp
END(clone)
/*
* We invoke syscall_trace through this intermediate function to
* ensure that the syscall input arguments are not clobbered. We
......
......@@ -39,4 +39,4 @@ static union {
.thread_info = INIT_THREAD_INFO(init_task_mem.s.task)
}};
asm (".global init_task; init_task = init_task_mem");
extern struct task_struct init_task __attribute__ ((alias("init_task_mem")));
......@@ -495,7 +495,7 @@ iosapic_register_intr (unsigned int gsi,
unsigned long polarity, unsigned long trigger)
{
int vector;
unsigned int dest = (ia64_get_lid() >> 16) & 0xffff;
unsigned int dest = (ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff;
vector = gsi_to_vector(gsi);
if (vector < 0)
......@@ -572,7 +572,7 @@ iosapic_override_isa_irq (unsigned int isa_irq, unsigned int gsi,
unsigned long trigger)
{
int vector;
unsigned int dest = (ia64_get_lid() >> 16) & 0xffff;
unsigned int dest = (ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff;
vector = isa_irq_to_vector(isa_irq);
......@@ -666,11 +666,11 @@ iosapic_enable_intr (unsigned int vector)
* Direct the interrupt vector to the current cpu; platform redirection
* will distribute them.
*/
dest = (ia64_get_lid() >> 16) & 0xffff;
dest = (ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff;
}
#else
/* direct the interrupt vector to the running cpu id */
dest = (ia64_get_lid() >> 16) & 0xffff;
dest = (ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff;
#endif
set_rte(vector, dest);
......
......@@ -30,6 +30,7 @@
#include <asm/bitops.h>
#include <asm/delay.h>
#include <asm/intrinsics.h>
#include <asm/io.h>
#include <asm/hw_irq.h>
#include <asm/machvec.h>
......@@ -93,8 +94,8 @@ ia64_handle_irq (ia64_vector vector, struct pt_regs *regs)
* because the register and the memory stack are not
* switched atomically.
*/
asm ("mov %0=ar.bsp" : "=r"(bsp));
asm ("mov %0=sp" : "=r"(sp));
bsp = ia64_getreg(_IA64_REG_AR_BSP);
sp = ia64_getreg(_IA64_REG_AR_SP);
if ((sp - bsp) < 1024) {
static unsigned char count;
......@@ -117,11 +118,11 @@ ia64_handle_irq (ia64_vector vector, struct pt_regs *regs)
* 16 (without this, it would be ~240, which could easily lead
* to kernel stack overflows).
*/
saved_tpr = ia64_get_tpr();
saved_tpr = ia64_getreg(_IA64_REG_CR_TPR);
ia64_srlz_d();
while (vector != IA64_SPURIOUS_INT_VECTOR) {
if (!IS_RESCHEDULE(vector)) {
ia64_set_tpr(vector);
ia64_setreg(_IA64_REG_CR_TPR, vector);
ia64_srlz_d();
do_IRQ(local_vector_to_irq(vector), regs);
......@@ -130,7 +131,7 @@ ia64_handle_irq (ia64_vector vector, struct pt_regs *regs)
* Disable interrupts and send EOI:
*/
local_irq_disable();
ia64_set_tpr(saved_tpr);
ia64_setreg(_IA64_REG_CR_TPR, saved_tpr);
}
ia64_eoi();
vector = ia64_get_ivr();
......@@ -193,7 +194,7 @@ ia64_send_ipi (int cpu, int vector, int delivery_mode, int redirect)
#ifdef CONFIG_SMP
phys_cpu_id = cpu_physical_id(cpu);
#else
phys_cpu_id = (ia64_get_lid() >> 16) & 0xffff;
phys_cpu_id = (ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff;
#endif
/*
......
......@@ -505,14 +505,14 @@ ia64_mca_cmc_vector_setup (void)
cmcv.cmcv_regval = 0;
cmcv.cmcv_mask = 0; /* Unmask/enable interrupt */
cmcv.cmcv_vector = IA64_CMC_VECTOR;
ia64_set_cmcv(cmcv.cmcv_regval);
ia64_setreg(_IA64_REG_CR_CMCV, cmcv.cmcv_regval);
IA64_MCA_DEBUG("ia64_mca_platform_init: CPU %d corrected "
"machine check vector %#x setup and enabled.\n",
smp_processor_id(), IA64_CMC_VECTOR);
IA64_MCA_DEBUG("ia64_mca_platform_init: CPU %d CMCV = %#016lx\n",
smp_processor_id(), ia64_get_cmcv());
smp_processor_id(), ia64_getreg(_IA64_REG_CR_CMCV));
}
/*
......@@ -531,11 +531,11 @@ void
ia64_mca_cmc_vector_disable (void *dummy)
{
cmcv_reg_t cmcv;
cmcv = (cmcv_reg_t)ia64_get_cmcv();
cmcv = (cmcv_reg_t)ia64_getreg(_IA64_REG_CR_CMCV);
cmcv.cmcv_mask = 1; /* Mask/disable interrupt */
ia64_set_cmcv(cmcv.cmcv_regval);
ia64_setreg(_IA64_REG_CR_CMCV, cmcv.cmcv_regval);
IA64_MCA_DEBUG("ia64_mca_cmc_vector_disable: CPU %d corrected "
"machine check vector %#x disabled.\n",
......@@ -558,11 +558,11 @@ void
ia64_mca_cmc_vector_enable (void *dummy)
{
cmcv_reg_t cmcv;
cmcv = (cmcv_reg_t)ia64_get_cmcv();
cmcv = (cmcv_reg_t)ia64_getreg(_IA64_REG_CR_CMCV);
cmcv.cmcv_mask = 0; /* Unmask/enable interrupt */
ia64_set_cmcv(cmcv.cmcv_regval);
ia64_setreg(_IA64_REG_CR_CMCV, cmcv.cmcv_regval);
IA64_MCA_DEBUG("ia64_mca_cmc_vector_enable: CPU %d corrected "
"machine check vector %#x enabled.\n",
......@@ -727,10 +727,10 @@ ia64_mca_init(void)
/* Register the os init handler with SAL */
if ((rc = ia64_sal_set_vectors(SAL_VECTOR_OS_INIT,
ia64_mc_info.imi_monarch_init_handler,
ia64_tpa(ia64_get_gp()),
ia64_tpa(ia64_getreg(_IA64_REG_GP)),
ia64_mc_info.imi_monarch_init_handler_size,
ia64_mc_info.imi_slave_init_handler,
ia64_tpa(ia64_get_gp()),
ia64_tpa(ia64_getreg(_IA64_REG_GP)),
ia64_mc_info.imi_slave_init_handler_size)))
{
printk(KERN_ERR "ia64_mca_init: Failed to register m/s init handlers with SAL. "
......@@ -816,16 +816,16 @@ ia64_mca_wakeup_ipi_wait(void)
do {
switch(irr_num) {
case 0:
irr = ia64_get_irr0();
irr = ia64_getreg(_IA64_REG_CR_IRR0);
break;
case 1:
irr = ia64_get_irr1();
irr = ia64_getreg(_IA64_REG_CR_IRR1);
break;
case 2:
irr = ia64_get_irr2();
irr = ia64_getreg(_IA64_REG_CR_IRR2);
break;
case 3:
irr = ia64_get_irr3();
irr = ia64_getreg(_IA64_REG_CR_IRR3);
break;
}
} while (!(irr & (1 << irr_bit))) ;
......@@ -1146,7 +1146,7 @@ ia64_mca_cmc_int_caller(int cpe_irq, void *arg, struct pt_regs *ptregs)
ia64_mca_cmc_int_handler(cpe_irq, arg, ptregs);
for (++cpuid ; !cpu_online(cpuid) && cpuid < NR_CPUS ; cpuid++);
if (cpuid < NR_CPUS) {
platform_send_ipi(cpuid, IA64_CMCP_VECTOR, IA64_IPI_DM_INT, 0);
} else {
......@@ -1176,7 +1176,7 @@ ia64_mca_cmc_int_caller(int cpe_irq, void *arg, struct pt_regs *ptregs)
start_count = -1;
}
return IRQ_HANDLED;
}
......
......@@ -109,21 +109,15 @@ default_init(struct task_struct *task, void *buf, unsigned int flags, int cpu, v
}
static int
default_handler(struct task_struct *task, void *buf, pfm_ovfl_arg_t *arg, struct pt_regs *regs)
default_handler(struct task_struct *task, void *buf, pfm_ovfl_arg_t *arg, struct pt_regs *regs, unsigned long stamp)
{
pfm_default_smpl_hdr_t *hdr;
pfm_default_smpl_entry_t *ent;
void *cur, *last;
unsigned long *e;
unsigned long ovfl_mask;
unsigned long ovfl_notify;
unsigned long stamp;
unsigned int npmds, i;
/*
* some time stamp
*/
stamp = ia64_get_itc();
unsigned char ovfl_pmd;
unsigned char ovfl_notify;
if (unlikely(buf == NULL || arg == NULL|| regs == NULL || task == NULL)) {
DPRINT(("[%d] invalid arguments buf=%p arg=%p\n", task->pid, buf, arg));
......@@ -133,8 +127,8 @@ default_handler(struct task_struct *task, void *buf, pfm_ovfl_arg_t *arg, struct
hdr = (pfm_default_smpl_hdr_t *)buf;
cur = hdr->hdr_cur_pos;
last = hdr->hdr_last_pos;
ovfl_mask = arg->ovfl_pmds[0];
ovfl_notify = arg->ovfl_notify[0];
ovfl_pmd = arg->ovfl_pmd;
ovfl_notify = arg->ovfl_notify;
/*
* check for space against largest possibly entry.
......@@ -153,12 +147,12 @@ default_handler(struct task_struct *task, void *buf, pfm_ovfl_arg_t *arg, struct
hdr->hdr_count++;
DPRINT_ovfl(("[%d] count=%lu cur=%p last=%p free_bytes=%lu ovfl_pmds=0x%lx ovfl_notify=0x%lx npmds=%u\n",
DPRINT_ovfl(("[%d] count=%lu cur=%p last=%p free_bytes=%lu ovfl_pmd=%d ovfl_notify=%d npmds=%u\n",
task->pid,
hdr->hdr_count,
cur, last,
last-cur,
ovfl_mask,
ovfl_pmd,
ovfl_notify, npmds));
/*
......@@ -172,7 +166,7 @@ default_handler(struct task_struct *task, void *buf, pfm_ovfl_arg_t *arg, struct
* - this is not necessarily the task controlling the session
*/
ent->pid = current->pid;
ent->cpu = smp_processor_id();
ent->ovfl_pmd = ovfl_pmd;
ent->last_reset_val = arg->pmd_last_reset; //pmd[0].reg_last_reset_val;
/*
......@@ -180,13 +174,9 @@ default_handler(struct task_struct *task, void *buf, pfm_ovfl_arg_t *arg, struct
*/
ent->ip = regs->cr_iip | ((regs->cr_ipsr >> 41) & 0x3);
/*
* which registers overflowed
*/
ent->ovfl_pmds = ovfl_mask;
ent->tstamp = stamp;
ent->cpu = smp_processor_id();
ent->set = arg->active_set;
ent->reserved1 = 0;
/*
* selectively store PMDs in increasing index number
......@@ -206,14 +196,14 @@ default_handler(struct task_struct *task, void *buf, pfm_ovfl_arg_t *arg, struct
/*
* keep same ovfl_pmds, ovfl_notify
*/
arg->ovfl_ctrl.notify_user = 0;
arg->ovfl_ctrl.block = 0;
arg->ovfl_ctrl.stop_monitoring = 0;
arg->ovfl_ctrl.reset_pmds = 1;
arg->ovfl_ctrl.bits.notify_user = 0;
arg->ovfl_ctrl.bits.block_task = 0;
arg->ovfl_ctrl.bits.mask_monitoring = 0;
arg->ovfl_ctrl.bits.reset_ovfl_pmds = 1; /* reset before returning from interrupt handler */
return 0;
full:
DPRINT_ovfl(("sampling buffer full free=%lu, count=%lu, ovfl_notify=0x%lx\n", last-cur, hdr->hdr_count, ovfl_notify));
DPRINT_ovfl(("sampling buffer full free=%lu, count=%lu, ovfl_notify=%d\n", last-cur, hdr->hdr_count, ovfl_notify));
/*
* increment the number of buffer overflows.
......@@ -222,22 +212,21 @@ default_handler(struct task_struct *task, void *buf, pfm_ovfl_arg_t *arg, struct
hdr->hdr_overflows++;
/*
* if no notification is needed, then we just reset the buffer index.
* if no notification is needed, then we saturate the buffer
*/
if (ovfl_notify == 0UL) {
if (ovfl_notify == 0) {
hdr->hdr_count = 0UL;
arg->ovfl_ctrl.notify_user = 0;
arg->ovfl_ctrl.block = 0;
arg->ovfl_ctrl.stop_monitoring = 0;
arg->ovfl_ctrl.reset_pmds = 1;
arg->ovfl_ctrl.bits.notify_user = 0;
arg->ovfl_ctrl.bits.block_task = 0;
arg->ovfl_ctrl.bits.mask_monitoring = 1;
arg->ovfl_ctrl.bits.reset_ovfl_pmds = 0;
} else {
/* keep same ovfl_pmds, ovfl_notify */
arg->ovfl_ctrl.notify_user = 1;
arg->ovfl_ctrl.block = 1;
arg->ovfl_ctrl.stop_monitoring = 1;
arg->ovfl_ctrl.reset_pmds = 0;
arg->ovfl_ctrl.bits.notify_user = 1;
arg->ovfl_ctrl.bits.block_task = 1; /* ignored for non-blocking context */
arg->ovfl_ctrl.bits.mask_monitoring = 1;
arg->ovfl_ctrl.bits.reset_ovfl_pmds = 0; /* no reset now */
}
return 0;
return -1; /* we are full, sorry */
}
static int
......@@ -250,8 +239,8 @@ default_restart(struct task_struct *task, pfm_ovfl_ctrl_t *ctrl, void *buf, stru
hdr->hdr_count = 0UL;
hdr->hdr_cur_pos = (void *)((unsigned long)buf)+sizeof(*hdr);
ctrl->stop_monitoring = 0;
ctrl->reset_pmds = PFM_PMD_LONG_RESET;
ctrl->bits.mask_monitoring = 0;
ctrl->bits.reset_ovfl_pmds = 1; /* uses long-reset values */
return 0;
}
......@@ -264,15 +253,16 @@ default_exit(struct task_struct *task, void *buf, struct pt_regs *regs)
}
static pfm_buffer_fmt_t default_fmt={
.fmt_name = "default_format",
.fmt_uuid = PFM_DEFAULT_SMPL_UUID,
.fmt_arg_size = sizeof(pfm_default_smpl_arg_t),
.fmt_validate = default_validate,
.fmt_getsize = default_get_size,
.fmt_init = default_init,
.fmt_handler = default_handler,
.fmt_restart = default_restart,
.fmt_exit = default_exit,
.fmt_name = "default_format",
.fmt_uuid = PFM_DEFAULT_SMPL_UUID,
.fmt_arg_size = sizeof(pfm_default_smpl_arg_t),
.fmt_validate = default_validate,
.fmt_getsize = default_get_size,
.fmt_init = default_init,
.fmt_handler = default_handler,
.fmt_restart = default_restart,
.fmt_restart_active = default_restart,
.fmt_exit = default_exit,
};
static int __init
......
......@@ -741,8 +741,8 @@ cpu_init (void)
* shouldn't be affected by this (moral: keep your ia32 locks aligned and you'll
* be fine).
*/
ia64_set_dcr( IA64_DCR_DP | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_DR
| IA64_DCR_DA | IA64_DCR_DD | IA64_DCR_LC);
ia64_setreg(_IA64_REG_CR_DCR, ( IA64_DCR_DP | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_DR
| IA64_DCR_DA | IA64_DCR_DD | IA64_DCR_LC));
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
if (current->mm)
......@@ -758,11 +758,11 @@ cpu_init (void)
ia64_set_itv(1 << 16);
ia64_set_lrr0(1 << 16);
ia64_set_lrr1(1 << 16);
ia64_set_pmv(1 << 16);
ia64_set_cmcv(1 << 16);
ia64_setreg(_IA64_REG_CR_PMV, 1 << 16);
ia64_setreg(_IA64_REG_CR_CMCV, 1 << 16);
/* clear TPR & XTP to enable all interrupt classes: */
ia64_set_tpr(0);
ia64_setreg(_IA64_REG_CR_TPR, 0);
#ifdef CONFIG_SMP
normal_xtp();
#endif
......
/*
* Architecture-specific signal handling support.
*
* Copyright (C) 1999-2002 Hewlett-Packard Co
* Copyright (C) 1999-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*
* Derived from i386 and Alpha versions.
......@@ -23,6 +23,7 @@
#include <linux/wait.h>
#include <asm/ia32.h>
#include <asm/intrinsics.h>
#include <asm/uaccess.h>
#include <asm/rse.h>
#include <asm/sigcontext.h>
......@@ -41,6 +42,12 @@
# define GET_SIGSET(k,u) __get_user((k)->sig[0], &(u)->sig[0])
#endif
#ifdef ASM_SUPPORTED
/*
* Don't let GCC use f16-f31, so that when we set up/restore the registers in the signal
* context in __kernel_sigtramp(), we can be sure that registers f16-f31 contain user-level
* values.
*/
register double f16 asm ("f16"); register double f17 asm ("f17");
register double f18 asm ("f18"); register double f19 asm ("f19");
register double f20 asm ("f20"); register double f21 asm ("f21");
......@@ -50,6 +57,7 @@ register double f24 asm ("f24"); register double f25 asm ("f25");
register double f26 asm ("f26"); register double f27 asm ("f27");
register double f28 asm ("f28"); register double f29 asm ("f29");
register double f30 asm ("f30"); register double f31 asm ("f31");
#endif
long
ia64_rt_sigsuspend (sigset_t *uset, size_t sigsetsize, struct sigscratch *scr)
......@@ -192,7 +200,7 @@ copy_siginfo_to_user (siginfo_t *to, siginfo_t *from)
case __SI_TIMER >> 16:
err |= __put_user(from->si_tid, &to->si_tid);
err |= __put_user(from->si_overrun, &to->si_overrun);
err |= __put_user(from->si_value, &to->si_value);
err |= __put_user(from->si_ptr, &to->si_ptr);
break;
case __SI_CHLD >> 16:
err |= __put_user(from->si_utime, &to->si_utime);
......@@ -592,10 +600,8 @@ ia64_do_signal (sigset_t *oldset, struct sigscratch *scr, long in_syscall)
if (IS_IA32_PROCESS(&scr->pt)) {
scr->pt.r8 = scr->pt.r1;
scr->pt.cr_iip -= 2;
if (errno == ERESTART_RESTARTBLOCK) {
if (errno == ERESTART_RESTARTBLOCK)
scr->pt.r8 = 0; /* x86 version of __NR_restart_syscall */
scr->pt.cr_iip -= 2;
}
} else {
/*
* Note: the syscall number is in r15 which is saved in
......
......@@ -7,6 +7,20 @@
* 05/12/00 grao <goutham.rao@intel.com> : added isr in siginfo for SIGFPE
*/
#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/tty.h>
#include <linux/vt_kern.h> /* For unblank_screen() */
#include <asm/fpswa.h>
#include <asm/hardirq.h>
#include <asm/ia32.h>
#include <asm/intrinsics.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
/*
* fp_emulate() needs to be able to access and update all floating point registers. Those
* saved in pt_regs can be accessed through that structure, but those not saved will be
......@@ -15,6 +29,7 @@
* by declaring preserved registers that are not marked as "fixed" as global register
* variables.
*/
#ifdef ASM_SUPPORTED
register double f2 asm ("f2"); register double f3 asm ("f3");
register double f4 asm ("f4"); register double f5 asm ("f5");
......@@ -27,20 +42,7 @@ register double f24 asm ("f24"); register double f25 asm ("f25");
register double f26 asm ("f26"); register double f27 asm ("f27");
register double f28 asm ("f28"); register double f29 asm ("f29");
register double f30 asm ("f30"); register double f31 asm ("f31");
#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/tty.h>
#include <linux/vt_kern.h> /* For unblank_screen() */
#include <asm/hardirq.h>
#include <asm/ia32.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
#include <asm/fpswa.h>
#endif
extern spinlock_t timerlist_lock;
......@@ -357,6 +359,10 @@ handle_fpu_swa (int fp_fault, struct pt_regs *regs, unsigned long isr)
siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri);
if (isr & 0x11) {
siginfo.si_code = FPE_FLTINV;
} else if (isr & 0x22) {
/* denormal operand gets the same si_code as underflow
* see arch/i386/kernel/traps.c:math_error() */
siginfo.si_code = FPE_FLTUND;
} else if (isr & 0x44) {
siginfo.si_code = FPE_FLTDIV;
}
......
......@@ -18,9 +18,10 @@
#include <linux/smp_lock.h>
#include <linux/tty.h>
#include <asm/uaccess.h>
#include <asm/rse.h>
#include <asm/intrinsics.h>
#include <asm/processor.h>
#include <asm/rse.h>
#include <asm/uaccess.h>
#include <asm/unaligned.h>
extern void die_if_kernel(char *str, struct pt_regs *regs, long err) __attribute__ ((noreturn));
......@@ -231,7 +232,7 @@ static u16 fr_info[32]={
static void
invala_gr (int regno)
{
# define F(reg) case reg: __asm__ __volatile__ ("invala.e r%0" :: "i"(reg)); break
# define F(reg) case reg: ia64_invala_gr(reg); break
switch (regno) {
F( 0); F( 1); F( 2); F( 3); F( 4); F( 5); F( 6); F( 7);
......@@ -258,7 +259,7 @@ invala_gr (int regno)
static void
invala_fr (int regno)
{
# define F(reg) case reg: __asm__ __volatile__ ("invala.e f%0" :: "i"(reg)); break
# define F(reg) case reg: ia64_invala_fr(reg); break
switch (regno) {
F( 0); F( 1); F( 2); F( 3); F( 4); F( 5); F( 6); F( 7);
......@@ -554,13 +555,13 @@ setfpreg (unsigned long regnum, struct ia64_fpreg *fpval, struct pt_regs *regs)
static inline void
float_spill_f0 (struct ia64_fpreg *final)
{
__asm__ __volatile__ ("stf.spill [%0]=f0" :: "r"(final) : "memory");
ia64_stf_spill(final, 0);
}
static inline void
float_spill_f1 (struct ia64_fpreg *final)
{
__asm__ __volatile__ ("stf.spill [%0]=f1" :: "r"(final) : "memory");
ia64_stf_spill(final, 1);
}
static void
......@@ -954,57 +955,65 @@ static const unsigned char float_fsz[4]={
static inline void
mem2float_extended (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldfe f6=[%0];; stf.spill [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
ia64_ldfe(6, init);
ia64_stop();
ia64_stf_spill(final, 6);
}
static inline void
mem2float_integer (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldf8 f6=[%0];; stf.spill [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
ia64_ldf8(6, init);
ia64_stop();
ia64_stf_spill(final, 6);
}
static inline void
mem2float_single (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldfs f6=[%0];; stf.spill [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
ia64_ldfs(6, init);
ia64_stop();
ia64_stf_spill(final, 6);
}
static inline void
mem2float_double (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldfd f6=[%0];; stf.spill [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
ia64_ldfd(6, init);
ia64_stop();
ia64_stf_spill(final, 6);
}
static inline void
float2mem_extended (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldf.fill f6=[%0];; stfe [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
ia64_ldf_fill(6, init);
ia64_stop();
ia64_stfe(final, 6);
}
static inline void
float2mem_integer (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldf.fill f6=[%0];; stf8 [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
ia64_ldf_fill(6, init);
ia64_stop();
ia64_stf8(final, 6);
}
static inline void
float2mem_single (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldf.fill f6=[%0];; stfs [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
ia64_ldf_fill(6, init);
ia64_stop();
ia64_stfs(final, 6);
}
static inline void
float2mem_double (struct ia64_fpreg *init, struct ia64_fpreg *final)
{
__asm__ __volatile__ ("ldf.fill f6=[%0];; stfd [%1]=f6"
:: "r"(init), "r"(final) : "f6","memory");
ia64_ldf_fill(6, init);
ia64_stop();
ia64_stfd(final, 6);
}
static int
......
......@@ -35,6 +35,7 @@ SECTIONS
{
*(.text.ivt)
*(.text)
*(.gnu.linkonce.t*)
}
.text2 : AT(ADDR(.text2) - LOAD_OFFSET)
{ *(.text2) }
......@@ -183,7 +184,7 @@ SECTIONS
. = __phys_per_cpu_start + PERCPU_PAGE_SIZE; /* ensure percpu data fits into percpu page size */
.data : AT(ADDR(.data) - LOAD_OFFSET)
{ *(.data) *(.gnu.linkonce.d*) CONSTRUCTORS }
{ *(.data) *(.data1) *(.gnu.linkonce.d*) CONSTRUCTORS }
. = ALIGN(16);
__gp = . + 0x200000; /* gp must be 16-byte aligned for exc. table */
......@@ -194,7 +195,7 @@ SECTIONS
can access them all, and initialized data all before uninitialized, so
we can shorten the on-disk segment size. */
.sdata : AT(ADDR(.sdata) - LOAD_OFFSET)
{ *(.sdata) }
{ *(.sdata) *(.sdata1) *(.srdata) }
_edata = .;
_bss = .;
.sbss : AT(ADDR(.sbss) - LOAD_OFFSET)
......
......@@ -96,8 +96,8 @@ ia64_global_tlb_purge (unsigned long start, unsigned long end, unsigned long nbi
/*
* Flush ALAT entries also.
*/
asm volatile ("ptc.ga %0,%1;;srlz.i;;" :: "r"(start), "r"(nbits<<2)
: "memory");
ia64_ptcga(start, (nbits<<2));
ia64_srlz_i();
start += (1UL << nbits);
} while (start < end);
}
......@@ -118,15 +118,13 @@ local_flush_tlb_all (void)
local_irq_save(flags);
for (i = 0; i < count0; ++i) {
for (j = 0; j < count1; ++j) {
asm volatile ("ptc.e %0" :: "r"(addr));
ia64_ptce(addr);
addr += stride1;
}
addr += stride0;
}
local_irq_restore(flags);
ia64_insn_group_barrier();
ia64_srlz_i(); /* srlz.i implies srlz.d */
ia64_insn_group_barrier();
}
void
......@@ -157,14 +155,12 @@ flush_tlb_range (struct vm_area_struct *vma, unsigned long start, unsigned long
platform_global_tlb_purge(start, end, nbits);
# else
do {
asm volatile ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
ia64_ptcl(start, (nbits<<2));
start += (1UL << nbits);
} while (start < end);
# endif
ia64_insn_group_barrier();
ia64_srlz_i(); /* srlz.i implies srlz.d */
ia64_insn_group_barrier();
}
void __init
......
......@@ -200,7 +200,7 @@ efi_unimplemented (void)
#ifdef SGI_SN2
#undef cpu_physical_id
#define cpu_physical_id(cpuid) ((ia64_get_lid() >> 16) & 0xffff)
#define cpu_physical_id(cpuid) ((ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff)
void
fprom_send_cpei(void) {
......@@ -224,14 +224,14 @@ fprom_send_cpei(void) {
#endif
static long
static struct sal_ret_values
sal_emulator (long index, unsigned long in1, unsigned long in2,
unsigned long in3, unsigned long in4, unsigned long in5,
unsigned long in6, unsigned long in7)
{
register long r9 asm ("r9") = 0;
register long r10 asm ("r10") = 0;
register long r11 asm ("r11") = 0;
long r9 = 0;
long r10 = 0;
long r11 = 0;
long status;
/*
......@@ -338,7 +338,7 @@ sal_emulator (long index, unsigned long in1, unsigned long in2,
}
asm volatile ("" :: "r"(r9), "r"(r10), "r"(r11));
return status;
return ((struct sal_ret_values) {status, r9, r10, r11});
}
......
......@@ -292,16 +292,16 @@ sn_check_intr(int irq, pcibr_intr_t intr) {
irr_bit = irq_to_vector(irq) % 64;
switch (irr_reg_num) {
case 0:
irr_reg = ia64_get_irr0();
irr_reg = ia64_getreg(_IA64_REG_CR_IRR0);
break;
case 1:
irr_reg = ia64_get_irr1();
irr_reg = ia64_getreg(_IA64_REG_CR_IRR1);
break;
case 2:
irr_reg = ia64_get_irr2();
irr_reg = ia64_getreg(_IA64_REG_CR_IRR2);
break;
case 3:
irr_reg = ia64_get_irr3();
irr_reg = ia64_getreg(_IA64_REG_CR_IRR3);
break;
}
if (!test_bit(irr_bit, &irr_reg) ) {
......@@ -354,9 +354,9 @@ sn_get_next_bit(void) {
void
sn_set_tpr(int vector) {
if (vector > IA64_LAST_DEVICE_VECTOR || vector < IA64_FIRST_DEVICE_VECTOR) {
ia64_set_tpr(vector);
ia64_setreg(_IA64_REG_CR_TPR, vector);
} else {
ia64_set_tpr(IA64_LAST_DEVICE_VECTOR);
ia64_setreg(_IA64_REG_CR_TPR, IA64_LAST_DEVICE_VECTOR);
}
}
......
......@@ -395,7 +395,7 @@ sn_cpu_init(void)
return;
cpuid = smp_processor_id();
cpuphyid = ((ia64_get_lid() >> 16) & 0xffff);
cpuphyid = ((ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff);
nasid = cpu_physical_id_to_nasid(cpuphyid);
cnode = nasid_to_cnodeid(nasid);
slice = cpu_physical_id_to_slice(cpuphyid);
......
......@@ -11,81 +11,73 @@
#include <asm/sn/sn2/io.h>
#undef __sn_inb
#undef __sn_inw
#undef __sn_inl
#undef __sn_outb
#undef __sn_outw
#undef __sn_outl
#undef __sn_readb
#undef __sn_readw
#undef __sn_readl
#undef __sn_readq
unsigned int
sn_inb (unsigned long port)
__sn_inb (unsigned long port)
{
return __sn_inb(port);
return ___sn_inb(port);
}
unsigned int
sn_inw (unsigned long port)
__sn_inw (unsigned long port)
{
return __sn_inw(port);
return ___sn_inw(port);
}
unsigned int
sn_inl (unsigned long port)
__sn_inl (unsigned long port)
{
return __sn_inl(port);
return ___sn_inl(port);
}
void
sn_outb (unsigned char val, unsigned long port)
__sn_outb (unsigned char val, unsigned long port)
{
__sn_outb(val, port);
___sn_outb(val, port);
}
void
sn_outw (unsigned short val, unsigned long port)
__sn_outw (unsigned short val, unsigned long port)
{
__sn_outw(val, port);
___sn_outw(val, port);
}
void
sn_outl (unsigned int val, unsigned long port)
__sn_outl (unsigned int val, unsigned long port)
{
__sn_outl(val, port);
___sn_outl(val, port);
}
unsigned char
sn_readb (void *addr)
__sn_readb (void *addr)
{
return __sn_readb (addr);
return ___sn_readb (addr);
}
unsigned short
sn_readw (void *addr)
__sn_readw (void *addr)
{
return __sn_readw (addr);
return ___sn_readw (addr);
}
unsigned int
sn_readl (void *addr)
__sn_readl (void *addr)
{
return __sn_readl (addr);
return ___sn_readl (addr);
}
unsigned long
sn_readq (void *addr)
__sn_readq (void *addr)
{
return __sn_readq (addr);
return ___sn_readq (addr);
}
/* define aliases: */
asm (".global __sn_inb, __sn_inw, __sn_inl");
asm ("__sn_inb = sn_inb");
asm ("__sn_inw = sn_inw");
asm ("__sn_inl = sn_inl");
asm (".global __sn_outb, __sn_outw, __sn_outl");
asm ("__sn_outb = sn_outb");
asm ("__sn_outw = sn_outw");
asm ("__sn_outl = sn_outl");
asm (".global __sn_readb, __sn_readw, __sn_readl, __sn_readq");
asm ("__sn_readb = sn_readb");
asm ("__sn_readw = sn_readw");
asm ("__sn_readl = sn_readl");
asm ("__sn_readq = sn_readq");
......@@ -30,7 +30,7 @@ config ACPI
bool "Full ACPI Support"
depends on !X86_VISWS
depends on !IA64_HP_SIM
depends on IA64 || (X86 && ACPI_HT)
depends on IA64 || (X86 || ACPI_HT)
default y
---help---
Advanced Configuration and Power Interface (ACPI) support for
......
......@@ -13,80 +13,12 @@
#include "sleep.h"
#define ACPI_SYSTEM_FILE_SLEEP "sleep"
#define ACPI_SYSTEM_FILE_ALARM "alarm"
#define _COMPONENT ACPI_SYSTEM_COMPONENT
ACPI_MODULE_NAME ("sleep")
static int acpi_system_sleep_seq_show(struct seq_file *seq, void *offset)
{
int i;
ACPI_FUNCTION_TRACE("acpi_system_sleep_seq_show");
for (i = 0; i <= ACPI_STATE_S5; i++) {
if (sleep_states[i]) {
seq_printf(seq,"S%d ", i);
if (i == ACPI_STATE_S4 && acpi_gbl_FACS->S4bios_f)
seq_printf(seq, "S4bios ");
}
}
seq_puts(seq, "\n");
return 0;
}
static int acpi_system_sleep_open_fs(struct inode *inode, struct file *file)
{
return single_open(file, acpi_system_sleep_seq_show, PDE(inode)->data);
}
static int
acpi_system_write_sleep (
struct file *file,
const char *buffer,
size_t count,
loff_t *ppos)
{
acpi_status status = AE_ERROR;
char state_string[12] = {'\0'};
u32 state = 0;
ACPI_FUNCTION_TRACE("acpi_system_write_sleep");
if (count > sizeof(state_string) - 1)
goto Done;
if (copy_from_user(state_string, buffer, count))
return_VALUE(-EFAULT);
state_string[count] = '\0';
state = simple_strtoul(state_string, NULL, 0);
if (state < 1 || state > 4)
goto Done;
if (!sleep_states[state])
goto Done;
#ifdef CONFIG_SOFTWARE_SUSPEND
if (state == 4) {
software_suspend();
goto Done;
}
#endif
status = acpi_suspend(state);
Done:
if (ACPI_FAILURE(status))
return_VALUE(-EINVAL);
else
return_VALUE(count);
}
static int acpi_system_alarm_seq_show(struct seq_file *seq, void *offset)
{
u32 sec, min, hr;
......@@ -362,14 +294,6 @@ acpi_system_write_alarm (
}
static struct file_operations acpi_system_sleep_fops = {
.open = acpi_system_sleep_open_fs,
.read = seq_read,
.write = acpi_system_write_sleep,
.llseek = seq_lseek,
.release = single_release,
};
static struct file_operations acpi_system_alarm_fops = {
.open = acpi_system_alarm_open_fs,
.read = seq_read,
......@@ -383,12 +307,6 @@ static int acpi_sleep_proc_init(void)
{
struct proc_dir_entry *entry = NULL;
/* 'sleep' [R/W]*/
entry = create_proc_entry(ACPI_SYSTEM_FILE_SLEEP,
S_IFREG|S_IRUGO|S_IWUSR, acpi_root_dir);
if (entry)
entry->proc_fops = &acpi_system_sleep_fops;
/* 'alarm' [R/W] */
entry = create_proc_entry(ACPI_SYSTEM_FILE_ALARM,
S_IFREG|S_IRUGO|S_IWUSR, acpi_root_dir);
......
......@@ -25,7 +25,6 @@
#include "power.h"
LIST_HEAD(dpm_active);
LIST_HEAD(dpm_suspended);
LIST_HEAD(dpm_off);
LIST_HEAD(dpm_off_irq);
......@@ -76,6 +75,7 @@ int device_pm_add(struct device * dev)
pr_debug("PM: Adding info for %s:%s\n",
dev->bus ? dev->bus->name : "No Bus", dev->kobj.name);
atomic_set(&dev->power.pm_users,0);
down(&dpm_sem);
list_add_tail(&dev->power.entry,&dpm_active);
device_pm_set_parent(dev,dev->parent);
......
......@@ -31,7 +31,6 @@ extern struct semaphore dpm_sem;
* The PM lists.
*/
extern struct list_head dpm_active;
extern struct list_head dpm_suspended;
extern struct list_head dpm_off;
extern struct list_head dpm_off_irq;
......@@ -61,15 +60,12 @@ extern void dpm_sysfs_remove(struct device *);
*/
extern int dpm_resume(void);
extern void dpm_power_up(void);
extern void dpm_power_up_irq(void);
extern void power_up_device(struct device *);
extern int resume_device(struct device *);
/*
* suspend.c
*/
extern int suspend_device(struct device *, u32);
extern int power_down_device(struct device *, u32);
/*
......
......@@ -14,8 +14,6 @@ static void runtime_resume(struct device * dev)
{
if (!dev->power.power_state)
return;
power_up_device(dev);
resume_device(dev);
}
......@@ -55,19 +53,11 @@ int dpm_runtime_suspend(struct device * dev, u32 state)
if (dev->power.power_state)
dpm_runtime_resume(dev);
error = suspend_device(dev,state);
if (!error) {
error = power_down_device(dev,state);
if (error)
goto ErrResume;
if (!(error = suspend_device(dev,state)))
dev->power.power_state = state;
}
Done:
up(&dpm_sem);
return error;
ErrResume:
resume_device(dev);
goto Done;
}
......
......@@ -3330,10 +3330,6 @@ static ide_driver_t ide_cdrom_driver = {
.drives = LIST_HEAD_INIT(ide_cdrom_driver.drives),
.start_power_step = ide_cdrom_start_power_step,
.complete_power_step = ide_cdrom_complete_power_step,
.gen_driver = {
.suspend = generic_ide_suspend,
.resume = generic_ide_resume,
}
};
static int idecd_open(struct inode * inode, struct file * file)
......
......@@ -1732,10 +1732,6 @@ static ide_driver_t idedisk_driver = {
.drives = LIST_HEAD_INIT(idedisk_driver.drives),
.start_power_step = idedisk_start_power_step,
.complete_power_step = idedisk_complete_power_step,
.gen_driver = {
.suspend = generic_ide_suspend,
.resume = generic_ide_resume,
}
};
static int idedisk_open(struct inode *inode, struct file *filp)
......
......@@ -1534,16 +1534,13 @@ int ata_attach(ide_drive_t *drive)
EXPORT_SYMBOL(ata_attach);
int generic_ide_suspend(struct device *dev, u32 state, u32 level)
static int generic_ide_suspend(struct device *dev, u32 state)
{
ide_drive_t *drive = dev->driver_data;
struct request rq;
struct request_pm_state rqpm;
ide_task_t args;
if (level == dev->power_state || level != SUSPEND_SAVE_STATE)
return 0;
memset(&rq, 0, sizeof(rq));
memset(&rqpm, 0, sizeof(rqpm));
memset(&args, 0, sizeof(args));
......@@ -1556,18 +1553,13 @@ int generic_ide_suspend(struct device *dev, u32 state, u32 level)
return ide_do_drive_cmd(drive, &rq, ide_wait);
}
EXPORT_SYMBOL(generic_ide_suspend);
int generic_ide_resume(struct device *dev, u32 level)
static int generic_ide_resume(struct device *dev)
{
ide_drive_t *drive = dev->driver_data;
struct request rq;
struct request_pm_state rqpm;
ide_task_t args;
if (level == dev->power_state || level != RESUME_RESTORE_STATE)
return 0;
memset(&rq, 0, sizeof(rq));
memset(&rqpm, 0, sizeof(rqpm));
memset(&args, 0, sizeof(args));
......@@ -1580,8 +1572,6 @@ int generic_ide_resume(struct device *dev, u32 level)
return ide_do_drive_cmd(drive, &rq, ide_head_wait);
}
EXPORT_SYMBOL(generic_ide_resume);
int generic_ide_ioctl(struct block_device *bdev, unsigned int cmd,
unsigned long arg)
{
......@@ -2594,6 +2584,8 @@ EXPORT_SYMBOL(ide_probe);
struct bus_type ide_bus_type = {
.name = "ide",
.suspend = generic_ide_suspend,
.resume = generic_ide_resume,
};
/*
......
......@@ -308,8 +308,10 @@ static void add_us_sample(struct mm_struct * mm, struct op_sample * s)
cookie = lookup_dcookie(mm, s->eip, &offset);
if (!cookie)
if (!cookie) {
atomic_inc(&oprofile_stats.sample_lost_no_mapping);
return;
}
if (cookie != last_cookie) {
add_cookie_switch(cookie);
......