Commit 28ea8844 authored by Linus Torvalds

Merge bk://bk.arm.linux.org.uk/linux-2.5-rmk

into home.transmeta.com:/home/torvalds/v2.5/linux
parents 6ab95ee3 015744f5
@@ -15,10 +15,12 @@ OR: they can now be DMA-aware.
   manage dma mappings for existing dma-ready buffers (see below).
 - URBs have an additional "transfer_dma" field, as well as a transfer_flags
-  bit saying if it's valid.  (Control requests also needed "setup_dma".)
+  bit saying if it's valid.  (Control requests also have "setup_dma" and a
+  corresponding transfer_flags bit.)
-- "usbcore" will map those DMA addresses, if a DMA-aware driver didn't do it
-  first and set URB_NO_DMA_MAP.  HCDs don't manage dma mappings for urbs.
+- "usbcore" will map those DMA addresses, if a DMA-aware driver didn't do
+  it first and set URB_NO_TRANSFER_DMA_MAP or URB_NO_SETUP_DMA_MAP.  HCDs
+  don't manage dma mappings for URBs.
 - There's a new "generic DMA API", parts of which are usable by USB device
   drivers.  Never use dma_set_mask() on any USB interface or device; that
@@ -33,8 +35,9 @@ and effects like cache-trashing can impose subtle penalties.
 - When you're allocating a buffer for DMA purposes anyway, use the buffer
   primitives.  Think of them as kmalloc and kfree that give you the right
   kind of addresses to store in urb->transfer_buffer and urb->transfer_dma,
-  while guaranteeing that hidden copies through DMA "bounce" buffers won't
-  slow things down.  You'd also set URB_NO_DMA_MAP in urb->transfer_flags:
+  while guaranteeing that no hidden copies through DMA "bounce" buffers will
+  slow things down.  You'd also set URB_NO_TRANSFER_DMA_MAP in
+  urb->transfer_flags:

	void *usb_buffer_alloc (struct usb_device *dev, size_t size,
		int mem_flags, dma_addr_t *dma);
@@ -42,10 +45,18 @@ and effects like cache-trashing can impose subtle penalties.
	void usb_buffer_free (struct usb_device *dev, size_t size,
		void *addr, dma_addr_t dma);

+  For control transfers you can use the buffer primitives or not for each
+  of the transfer buffer and setup buffer independently.  Set the flag bits
+  URB_NO_TRANSFER_DMA_MAP and URB_NO_SETUP_DMA_MAP to indicate which
+  buffers you have prepared.  For non-control transfers URB_NO_SETUP_DMA_MAP
+  is ignored.
+
   The memory buffer returned is "dma-coherent"; sometimes you might need to
   force a consistent memory access ordering by using memory barriers.  It's
   not using a streaming DMA mapping, so it's good for small transfers on
-  systems where the I/O would otherwise tie up an IOMMU mapping.
+  systems where the I/O would otherwise tie up an IOMMU mapping.  (See
+  Documentation/DMA-mapping.txt for definitions of "coherent" and "streaming"
+  DMA mappings.)

   Asking for 1/Nth of a page (as well as asking for N pages) is reasonably
   space-efficient.
@@ -91,7 +102,8 @@ DMA address space of the device.
   These calls all work with initialized urbs:  urb->dev, urb->pipe,
   urb->transfer_buffer, and urb->transfer_buffer_length must all be
-  valid when these calls are used:
+  valid when these calls are used (urb->setup_packet must be valid too
+  if urb is a control request):

	struct urb *usb_buffer_map (struct urb *urb);
@@ -99,6 +111,6 @@ DMA address space of the device.
	void usb_buffer_unmap (struct urb *urb);

-  The calls manage urb->transfer_dma for you, and set URB_NO_DMA_MAP so that
-  usbcore won't map or unmap the buffer.
+  The calls manage urb->transfer_dma for you, and set URB_NO_TRANSFER_DMA_MAP
+  so that usbcore won't map or unmap the buffer.  The same goes for
+  urb->setup_dma and URB_NO_SETUP_DMA_MAP for control requests.
@@ -26,6 +26,10 @@ config RWSEM_XCHGADD_ALGORITHM
 	bool
 	default y

+config TIME_INTERPOLATION
+	bool
+	default y
+
 choice
 	prompt "IA-64 processor type"
 	default ITANIUM
@@ -63,7 +67,7 @@ config IA64_GENERIC
 	  HP-simulator   For the HP simulator
 	  (<http://software.hp.com/ia64linux/>).
 	  HP-zx1         For HP zx1-based systems.
-	  SN1-simulator  For the SGI SN1 simulator.
+	  SGI-SN2        For SGI Altix systems
 	  DIG-compliant  For DIG ("Developer's Interface Guide") compliant
 	  systems.
@@ -82,9 +86,6 @@ config IA64_HP_ZX1
 	  for the zx1 I/O MMU and makes root bus bridges appear in PCI config
 	  space (required for zx1 agpgart support).

-config IA64_SGI_SN1
-	bool "SGI-SN1"
-
 config IA64_SGI_SN2
 	bool "SGI-SN2"
@@ -190,8 +191,8 @@ config ITANIUM_BSTEP_SPECIFIC
 # align cache-sensitive data to 128 bytes
 config IA64_L1_CACHE_SHIFT
 	int
-	default "7" if MCKINLEY || ITANIUM && IA64_SGI_SN1
-	default "6" if ITANIUM && !IA64_SGI_SN1
+	default "7" if MCKINLEY
+	default "6" if ITANIUM

 # align cache-sensitive data to 64 bytes
 config MCKINLEY_ASTEP_SPECIFIC
@@ -210,7 +211,7 @@ config MCKINLEY_A0_SPECIFIC
 config NUMA
 	bool "Enable NUMA support" if IA64_GENERIC || IA64_DIG || IA64_HP_ZX1
-	default y if IA64_SGI_SN1 || IA64_SGI_SN2
+	default y if IA64_SGI_SN2
 	help
 	  Say Y to compile the kernel to support NUMA (Non-Uniform Memory
 	  Access).  This option is for configuring high-end multiprocessor
@@ -234,7 +235,7 @@ endchoice
 config DISCONTIGMEM
 	bool
-	depends on IA64_SGI_SN1 || IA64_SGI_SN2 || (IA64_GENERIC || IA64_DIG || IA64_HP_ZX1) && NUMA
+	depends on IA64_SGI_SN2 || (IA64_GENERIC || IA64_DIG || IA64_HP_ZX1) && NUMA
 	default y
 	help
 	  Say Y to support efficient handling of discontiguous physical memory,
@@ -259,7 +260,7 @@ config VIRTUAL_MEM_MAP
 config IA64_MCA
 	bool "Enable IA-64 Machine Check Abort" if IA64_GENERIC || IA64_DIG || IA64_HP_ZX1
-	default y if IA64_SGI_SN1 || IA64_SGI_SN2
+	default y if IA64_SGI_SN2
 	help
 	  Say Y here to enable machine check support for IA-64.  If you're
 	  unsure, answer Y.
@@ -288,17 +289,12 @@ config PM
 config IOSAPIC
 	bool
-	depends on IA64_GENERIC || IA64_DIG || IA64_HP_ZX1
-	default y
-
-config IA64_SGI_SN
-	bool
-	depends on IA64_SGI_SN1 || IA64_SGI_SN2
+	depends on IA64_GENERIC || IA64_DIG || IA64_HP_ZX1 || IA64_SGI_SN2
 	default y

 config IA64_SGI_SN_DEBUG
 	bool "Enable extra debugging code"
-	depends on IA64_SGI_SN1 || IA64_SGI_SN2
+	depends on IA64_SGI_SN2
 	help
 	  Turns on extra debugging code in the SGI SN (Scalable NUMA) platform
 	  for IA-64.  Unless you are debugging problems on an SGI SN IA-64 box,
@@ -306,14 +302,14 @@ config IA64_SGI_SN_DEBUG
 config IA64_SGI_SN_SIM
 	bool "Enable SGI Medusa Simulator Support"
-	depends on IA64_SGI_SN1 || IA64_SGI_SN2
+	depends on IA64_SGI_SN2
 	help
 	  If you are compiling a kernel that will run under SGI's IA-64
 	  simulator (Medusa) then say Y, otherwise say N.

 config IA64_SGI_AUTOTEST
 	bool "Enable autotest (llsc).  Option to run cache test instead of booting"
-	depends on IA64_SGI_SN1 || IA64_SGI_SN2
+	depends on IA64_SGI_SN2
 	help
 	  Build a kernel used for hardware validation.  If you include the
 	  keyword "autotest" on the boot command line, the kernel does NOT boot.
@@ -323,7 +319,7 @@ config IA64_SGI_AUTOTEST
 config SERIAL_SGI_L1_PROTOCOL
 	bool "Enable protocol mode for the L1 console"
-	depends on IA64_SGI_SN1 || IA64_SGI_SN2
+	depends on IA64_SGI_SN2
 	help
 	  Uses protocol mode instead of raw mode for the level 1 console on the
 	  SGI SN (Scalable NUMA) platform for IA-64.  If you are compiling for
@@ -331,17 +327,9 @@ config SERIAL_SGI_L1_PROTOCOL
 config PERCPU_IRQ
 	bool
-	depends on IA64_SGI_SN1 || IA64_SGI_SN2
+	depends on IA64_SGI_SN2
 	default y

-config PCIBA
-	tristate "PCIBA support"
-	depends on IA64_SGI_SN1 || IA64_SGI_SN2
-	help
-	  IRIX PCIBA-inspired user mode PCI interface for the SGI SN (Scalable
-	  NUMA) platform for IA-64.  Unless you are compiling a kernel for an
-	  SGI SN IA-64 box, say N.
-
 # On IA-64, we always want an ELF /proc/kcore.
 config KCORE_ELF
 	bool
@@ -698,20 +686,18 @@ endmenu
 source "drivers/usb/Kconfig"

-source "lib/Kconfig"
-
 source "net/bluetooth/Kconfig"

 endif

+source "lib/Kconfig"
+
 source "arch/ia64/hp/sim/Kconfig"

 menu "Kernel hacking"

+config FSYS
+	bool "Light-weight system-call support (via epc)"
+
 choice
 	prompt "Physical memory granularity"
 	default IA64_GRANULE_64MB
@@ -778,7 +764,7 @@ config MAGIC_SYSRQ
 config IA64_EARLY_PRINTK
 	bool "Early printk support"
-	depends on DEBUG_KERNEL
+	depends on DEBUG_KERNEL && !IA64_GENERIC
 	help
 	  Selecting this option uses the VGA screen or serial console for
 	  printk() output before the consoles are initialised.  It is useful
...
@@ -18,8 +18,8 @@ LDFLAGS_MODULE += -T arch/ia64/module.lds
 AFLAGS_KERNEL := -mconstant-gp
 EXTRA :=

-cflags-y := -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f10-f15,f32-f127 \
-	     -falign-functions=32
+cflags-y := -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f12-f15,f32-f127 \
+	     -falign-functions=32 -frename-registers
 CFLAGS_KERNEL := -mconstant-gp

 GCC_VERSION=$(shell $(CC) -v 2>&1 | fgrep 'gcc version' | cut -f3 -d' ' | cut -f1 -d'.')
@@ -27,6 +27,10 @@ GCC_MINOR_VERSION=$(shell $(CC) -v 2>&1 | fgrep 'gcc version' | cut -f3 -d' ' |
 GAS_STATUS=$(shell arch/ia64/scripts/check-gas $(CC) $(OBJDUMP))

+arch-cppflags := $(shell arch/ia64/scripts/toolchain-flags $(CC) $(LD) $(OBJDUMP))
+cflags-y += $(arch-cppflags)
+AFLAGS += $(arch-cppflags)
+
 ifeq ($(GAS_STATUS),buggy)
 $(error Sorry, you need a newer version of the assember, one that is built from \
 	a source-tree that post-dates 18-Dec-2002.  You can find a pre-compiled \
@@ -35,19 +39,18 @@ $(error Sorry, you need a newer version of the assember, one that is built from
 	ftp://ftp.hpl.hp.com/pub/linux-ia64/gas-030124.tar.gz)
 endif

-ifneq ($(GCC_VERSION),2)
-	cflags-$(CONFIG_ITANIUM) += -frename-registers
+ifeq ($(GCC_VERSION),2)
+$(error Sorry, your compiler is too old.  GCC v2.96 is known to generate bad code.)
 endif

 ifeq ($(GCC_VERSION),3)
 ifeq ($(GCC_MINOR_VERSION),4)
 	cflags-$(CONFIG_ITANIUM) += -mtune=merced
 	cflags-$(CONFIG_MCKINLEY) += -mtune=mckinley
 endif
 endif

 cflags-$(CONFIG_ITANIUM_BSTEP_SPECIFIC) += -mb-step
-cflags-$(CONFIG_IA64_SGI_SN) += -DBRINGUP
 CFLAGS += $(cflags-y)

 head-y := arch/ia64/kernel/head.o arch/ia64/kernel/init_task.o
@@ -58,7 +61,7 @@ core-$(CONFIG_IA32_SUPPORT) += arch/ia64/ia32/
 core-$(CONFIG_IA64_DIG) += arch/ia64/dig/
 core-$(CONFIG_IA64_GENERIC) += arch/ia64/dig/
 core-$(CONFIG_IA64_HP_ZX1) += arch/ia64/dig/
-core-$(CONFIG_IA64_SGI_SN) += arch/ia64/sn/
+core-$(CONFIG_IA64_SGI_SN2) += arch/ia64/sn/

 drivers-$(CONFIG_PCI) += arch/ia64/pci/
 drivers-$(CONFIG_IA64_HP_SIM) += arch/ia64/hp/sim/
@@ -66,33 +69,37 @@ drivers-$(CONFIG_IA64_HP_ZX1) += arch/ia64/hp/common/ arch/ia64/hp/zx1/
 drivers-$(CONFIG_IA64_GENERIC) += arch/ia64/hp/common/ arch/ia64/hp/zx1/ arch/ia64/hp/sim/

 boot := arch/ia64/boot
-tools := arch/ia64/tools

-.PHONY: boot compressed include/asm-ia64/offsets.h
-
-all: prepare vmlinux
+.PHONY: boot compressed check

 compressed: vmlinux.gz

 vmlinux.gz: vmlinux
-	$(Q)$(MAKE) $(build)=$(boot) vmlinux.gz
+	$(Q)$(MAKE) $(build)=$(boot) $@

 check: vmlinux
-	arch/ia64/scripts/unwcheck.sh vmlinux
+	arch/ia64/scripts/unwcheck.sh $<

 archclean:
 	$(Q)$(MAKE) $(clean)=$(boot)
-	$(Q)$(MAKE) $(clean)=$(tools)

-CLEAN_FILES += include/asm-ia64/offsets.h vmlinux.gz bootloader
+CLEAN_FILES += include/asm-ia64/.offsets.h.stamp include/asm-ia64/offsets.h vmlinux.gz bootloader

 prepare: include/asm-ia64/offsets.h

+include/asm-$(ARCH)/offsets.h: arch/$(ARCH)/kernel/asm-offsets.s
+	$(call filechk,gen-asm-offsets)
+
+arch/ia64/kernel/asm-offsets.s: include/asm-ia64/.offsets.h.stamp
+
+include/asm-ia64/.offsets.h.stamp:
+	[ -s include/asm-ia64/offsets.h ] \
+	|| echo "#define IA64_TASK_SIZE 0" > include/asm-ia64/offsets.h
+	touch $@
+
 boot: lib/lib.a vmlinux
 	$(Q)$(MAKE) $(build)=$(boot) $@

-include/asm-ia64/offsets.h: include/asm include/linux/version.h include/config/MARKER
-	$(Q)$(MAKE) $(build)=$(tools) $@
-
 define archhelp
 	echo '  compressed	- Build compressed kernel image'
...
@@ -55,6 +55,9 @@ struct disk_stat {

 #include "../kernel/fw-emu.c"

+/* This needs to be defined because lib/string.c:strlcat() calls it in case of error... */
+asm (".global printk; printk = 0");
+
 /*
  * Set a break point on this function so that symbols are available to set breakpoints in
  * the kernel being debugged.
@@ -181,10 +184,10 @@ _start (void)
 			continue;

 		req.len = elf_phdr->p_filesz;
-		req.addr = __pa(elf_phdr->p_vaddr);
+		req.addr = __pa(elf_phdr->p_paddr);
 		ssc(fd, 1, (long) &req, elf_phdr->p_offset, SSC_READ);
 		ssc((long) &stat, 0, 0, 0, SSC_WAIT_COMPLETION);
-		memset((char *)__pa(elf_phdr->p_vaddr) + elf_phdr->p_filesz, 0,
+		memset((char *)__pa(elf_phdr->p_paddr) + elf_phdr->p_filesz, 0,
 		       elf_phdr->p_memsz - elf_phdr->p_filesz);
 	}
 	ssc(fd, 0, 0, 0, SSC_CLOSE);
...
@@ -1682,6 +1682,10 @@ ioc_init(u64 hpa, void *handle)
 	ioc_resource_init(ioc);
 	ioc_sac_init(ioc);

+	if ((long) ~IOVP_MASK > (long) ia64_max_iommu_merge_mask)
+		ia64_max_iommu_merge_mask = ~IOVP_MASK;
+	MAX_DMA_ADDRESS = ~0UL;
+
 	printk(KERN_INFO PFX
 		"%s %d.%d HPA 0x%lx IOVA space %dMb at 0x%lx\n",
 		ioc->name, (ioc->rev >> 4) & 0xF, ioc->rev & 0xF,
@@ -1898,22 +1902,26 @@ acpi_sba_ioc_add(struct acpi_device *device)
 	struct ioc *ioc;
 	acpi_status status;
 	u64 hpa, length;
-	struct acpi_device_info dev_info;
+	struct acpi_buffer buffer;
+	struct acpi_device_info *dev_info;

 	status = hp_acpi_csr_space(device->handle, &hpa, &length);
 	if (ACPI_FAILURE(status))
 		return 1;

-	status = acpi_get_object_info(device->handle, &dev_info);
+	buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER;
+	status = acpi_get_object_info(device->handle, &buffer);
 	if (ACPI_FAILURE(status))
 		return 1;
+	dev_info = buffer.pointer;

 	/*
 	 * For HWP0001, only SBA appears in ACPI namespace.  It encloses the PCI
 	 * root bridges, and its CSR space includes the IOC function.
 	 */
-	if (strncmp("HWP0001", dev_info.hardware_id, 7) == 0)
+	if (strncmp("HWP0001", dev_info->hardware_id.value, 7) == 0)
 		hpa += ZX1_IOC_OFFSET;
+	ACPI_MEM_FREE(dev_info);

 	ioc = ioc_init(hpa, device->handle);
 	if (!ioc)
@@ -1933,8 +1941,6 @@ static struct acpi_driver acpi_sba_ioc_driver = {
 static int __init
 sba_init(void)
 {
-	MAX_DMA_ADDRESS = ~0UL;
-
 	acpi_bus_register_driver(&acpi_sba_ioc_driver);
 #ifdef CONFIG_PCI
...
@@ -8,6 +8,10 @@ config HP_SIMETH
 config HP_SIMSERIAL
 	bool "Simulated serial driver support"

+config HP_SIMSERIAL_CONSOLE
+	bool "Console for HP simulator"
+	depends on HP_SIMSERIAL
+
 config HP_SIMSCSI
 	bool "Simulated SCSI disk"
 	depends on SCSI
...
@@ -7,9 +7,10 @@
 # Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com)
 #

-obj-y := hpsim_console.o hpsim_irq.o hpsim_setup.o
+obj-y := hpsim_irq.o hpsim_setup.o
 obj-$(CONFIG_IA64_GENERIC) += hpsim_machvec.o

 obj-$(CONFIG_HP_SIMETH) += simeth.o
 obj-$(CONFIG_HP_SIMSERIAL) += simserial.o
+obj-$(CONFIG_HP_SIMSERIAL_CONSOLE) += hpsim_console.o
 obj-$(CONFIG_HP_SIMSCSI) += simscsi.o
@@ -5,6 +5,7 @@
  *	David Mosberger-Tang <davidm@hpl.hp.com>
  * Copyright (C) 1999 Vijay Chander <vijay@engr.sgi.com>
  */
+#include <linux/config.h>
 #include <linux/console.h>
 #include <linux/init.h>
 #include <linux/kdev_t.h>
@@ -24,8 +25,6 @@
 #include "hpsim_ssc.h"

-extern struct console hpsim_cons;
-
 /*
  * Simulator system call.
  */
@@ -56,5 +55,11 @@ hpsim_setup (char **cmdline_p)
 {
 	ROOT_DEV = Root_SDA1;		/* default to first SCSI drive */

-	register_console(&hpsim_cons);
+#ifdef CONFIG_HP_SIMSERIAL_CONSOLE
+	{
+		extern struct console hpsim_cons;
+		if (ia64_platform_is("hpsim"))
+			register_console(&hpsim_cons);
+	}
+#endif
 }
@@ -1031,6 +1031,9 @@ simrs_init (void)
 	int i;
 	struct serial_state *state;

+	if (!ia64_platform_is("hpsim"))
+		return -ENODEV;
+
 	hp_simserial_driver = alloc_tty_driver(1);
 	if (!hp_simserial_driver)
 		return -ENOMEM;
...
@@ -16,7 +16,8 @@
 #include <asm/param.h>
 #include <asm/signal.h>

-#include <asm/ia32.h>
+#include "ia32priv.h"

 #define CONFIG_BINFMT_ELF32
...
@@ -44,14 +44,8 @@ ENTRY(ia32_clone)
 	br.call.sptk.many rp=do_fork
 .ret0:	.restore sp
 	adds sp=IA64_SWITCH_STACK_SIZE,sp	// pop the switch stack
-	mov r2=-1000
-	adds r3=IA64_TASK_PID_OFFSET,r8
-	;;
-	cmp.leu p6,p0=r8,r2
 	mov ar.pfs=loc1
 	mov rp=loc0
-	;;
-(p6)	ld4 r8=[r3]
 	br.ret.sptk.many rp
 END(ia32_clone)
@@ -183,14 +177,8 @@ GLOBAL_ENTRY(sys32_fork)
 	br.call.sptk.few rp=do_fork
 .ret5:	.restore sp
 	adds sp=IA64_SWITCH_STACK_SIZE,sp	// pop the switch stack
-	mov r2=-1000
-	adds r3=IA64_TASK_PID_OFFSET,r8
-	;;
-	cmp.leu p6,p0=r8,r2
 	mov ar.pfs=loc1
 	mov rp=loc0
-	;;
-(p6)	ld4 r8=[r3]
 	br.ret.sptk.many rp
 END(sys32_fork)
@@ -439,8 +427,8 @@ ia32_syscall_table:
 	data8 sys_ni_syscall
 	data8 sys_ni_syscall
 	data8 compat_sys_futex		/* 240 */
-	data8 compat_sys_setaffinity
-	data8 compat_sys_getaffinity
+	data8 compat_sys_sched_setaffinity
+	data8 compat_sys_sched_getaffinity
 	data8 sys_ni_syscall
 	data8 sys_ni_syscall
 	data8 sys_ni_syscall		/* 245 */
...
@@ -11,36 +11,107 @@
 #include <linux/dirent.h>
 #include <linux/fs.h>	/* argh, msdos_fs.h isn't self-contained... */
 #include <linux/signal.h>	/* argh, msdos_fs.h isn't self-contained... */
+#include <linux/compat.h>

-#include <asm/ia32.h>
+#include "ia32priv.h"

-#include <linux/msdos_fs.h>
-#include <linux/mtio.h>
-#include <linux/ncp_fs.h>
-#include <linux/capi.h>
-#include <linux/videodev.h>
-#include <linux/synclink.h>
-#include <linux/atmdev.h>
-#include <linux/atm_eni.h>
-#include <linux/atm_nicstar.h>
-#include <linux/atm_zatm.h>
-#include <linux/atm_idt77105.h>
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/ioctl.h>
+#include <linux/if.h>
+#include <linux/slab.h>
+#include <linux/hdreg.h>
+#include <linux/raid/md.h>
+#include <linux/kd.h>
+#include <linux/route.h>
+#include <linux/in6.h>
+#include <linux/ipv6_route.h>
+#include <linux/skbuff.h>
+#include <linux/netlink.h>
+#include <linux/vt.h>
+#include <linux/file.h>
+#include <linux/fd.h>
 #include <linux/ppp_defs.h>
 #include <linux/if_ppp.h>
-#include <linux/ixjuser.h>
-#include <linux/i2o-dev.h>
+#include <linux/if_pppox.h>
+#include <linux/mtio.h>
+#include <linux/cdrom.h>
+#include <linux/loop.h>
+#include <linux/auto_fs.h>
+#include <linux/auto_fs4.h>
+#include <linux/devfs_fs.h>
+#include <linux/tty.h>
+#include <linux/vt_kern.h>
+#include <linux/fb.h>
+#include <linux/ext2_fs.h>
+#include <linux/videodev.h>
+#include <linux/netdevice.h>
+#include <linux/raw.h>
+#include <linux/smb_fs.h>
+#include <linux/blkpg.h>
+#include <linux/blk.h>
+#include <linux/elevator.h>
+#include <linux/rtc.h>
+#include <linux/pci.h>
+#include <linux/rtc.h>
+#include <linux/module.h>
+#include <linux/serial.h>
+#include <linux/reiserfs_fs.h>
+#include <linux/if_tun.h>
+#include <linux/dirent.h>
+#include <linux/ctype.h>
+#include <linux/ncp_fs.h>
+#include <net/bluetooth/bluetooth.h>
+#include <net/bluetooth/rfcomm.h>
 #include <scsi/scsi.h>
 /* Ugly hack. */
 #undef __KERNEL__
 #include <scsi/scsi_ioctl.h>
 #define __KERNEL__
 #include <scsi/sg.h>
+#include <asm/types.h>
+#include <asm/uaccess.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+#include <linux/if_bonding.h>
+#include <linux/watchdog.h>
+#include <asm/module.h>
+#include <asm/ioctl32.h>
+#include <linux/soundcard.h>
+#include <linux/lp.h>
+#include <linux/atm.h>
+#include <linux/atmarp.h>
+#include <linux/atmclip.h>
+#include <linux/atmdev.h>
+#include <linux/atmioc.h>
+#include <linux/atmlec.h>
+#include <linux/atmmpc.h>
+#include <linux/atmsvc.h>
+#include <linux/atm_tcp.h>
+#include <linux/sonet.h>
+#include <linux/atm_suni.h>
+#include <linux/mtd/mtd.h>
+#include <net/bluetooth/bluetooth.h>
+#include <net/bluetooth/hci.h>
+#include <linux/usb.h>
+#include <linux/usbdevice_fs.h>
+#include <linux/nbd.h>
+#include <linux/random.h>
+#include <linux/filter.h>

 #include <../drivers/char/drm/drm.h>
 #include <../drivers/char/drm/mga_drm.h>
 #include <../drivers/char/drm/i810_drm.h>

 #define IOCTL_NR(a)	((a) & ~(_IOC_SIZEMASK << _IOC_SIZESHIFT))

 #define DO_IOCTL(fd, cmd, arg) ({ \
@@ -57,6 +128,9 @@
 asmlinkage long sys_ioctl(unsigned int fd, unsigned int cmd, unsigned long arg);

+#define VFAT_IOCTL_READDIR_BOTH32	_IOR('r', 1, struct linux32_dirent[2])
+#define VFAT_IOCTL_READDIR_SHORT32	_IOR('r', 2, struct linux32_dirent[2])
+
 static long
 put_dirent32 (struct dirent *d, struct linux32_dirent *d32)
 {
@@ -67,6 +141,23 @@ put_dirent32 (struct dirent *d, struct linux32_dirent *d32)
 		|| put_user(d->d_reclen, &d32->d_reclen)
 		|| copy_to_user(d32->d_name, d->d_name, namelen + 1));
 }

+static int vfat_ioctl32(unsigned fd, unsigned cmd, void *ptr)
+{
+	int ret;
+	mm_segment_t oldfs = get_fs();
+	struct dirent d[2];
+
+	set_fs(KERNEL_DS);
+	ret = sys_ioctl(fd, cmd, (unsigned long)&d);
+	set_fs(oldfs);
+	if (!ret) {
+		ret |= put_dirent32(&d[0], (struct linux32_dirent *)ptr);
+		ret |= put_dirent32(&d[1], ((struct linux32_dirent *)ptr) + 1);
+	}
+	return ret;
+}
+
 /*
  * The transform code for the SG_IO ioctl was brazenly lifted from
  * the Sparc64 port in the file `arch/sparc64/kernel/ioctl32.c'.
@@ -294,3 +385,83 @@ static int sg_ioctl_trans(unsigned int fd, unsigned int cmd, unsigned long arg)
 	}
 	return err;
 }
+
+static __inline__ void *alloc_user_space(long len)
+{
+	struct pt_regs *regs = ((struct pt_regs *)((unsigned long) current +
+		IA64_STK_OFFSET)) - 1;
+	return (void *)regs->r12 - len;
+}
+
+struct ifmap32 {
+	u32 mem_start;
+	u32 mem_end;
+	unsigned short base_addr;
+	unsigned char irq;
+	unsigned char dma;
+	unsigned char port;
+};
+
+struct ifreq32 {
+#define IFHWADDRLEN	6
+#define IFNAMSIZ	16
+	union {
+		char ifrn_name[IFNAMSIZ];	/* if name, e.g. "en0" */
+	} ifr_ifrn;
+	union {
+		struct sockaddr ifru_addr;
+		struct sockaddr ifru_dstaddr;
+		struct sockaddr ifru_broadaddr;
+		struct sockaddr ifru_netmask;
+		struct sockaddr ifru_hwaddr;
+		short ifru_flags;
+		int ifru_ivalue;
+		int ifru_mtu;
+		struct ifmap32 ifru_map;
+		char ifru_slave[IFNAMSIZ];	/* Just fits the size */
+		char ifru_newname[IFNAMSIZ];
+		compat_caddr_t ifru_data;
+	} ifr_ifru;
+};
+
+int siocdevprivate_ioctl(unsigned int fd, unsigned int cmd, unsigned long arg)
+{
+	struct ifreq *u_ifreq64;
+	struct ifreq32 *u_ifreq32 = (struct ifreq32 *) arg;
+	char tmp_buf[IFNAMSIZ];
+	void *data64;
+	u32 data32;
+
+	if (copy_from_user(&tmp_buf[0], &(u_ifreq32->ifr_ifrn.ifrn_name[0]),
+			   IFNAMSIZ))
+		return -EFAULT;
+	if (__get_user(data32, &u_ifreq32->ifr_ifru.ifru_data))
+		return -EFAULT;
+	data64 = (void *) P(data32);
+
+	u_ifreq64 = alloc_user_space(sizeof(*u_ifreq64));
+
+	/* Don't check these user accesses, just let that get trapped
+	 * in the ioctl handler instead.
+	 */
+	copy_to_user(&u_ifreq64->ifr_ifrn.ifrn_name[0], &tmp_buf[0], IFNAMSIZ);
+	__put_user(data64, &u_ifreq64->ifr_ifru.ifru_data);
+
+	return sys_ioctl(fd, cmd, (unsigned long) u_ifreq64);
+}
+
+typedef int (* ioctl32_handler_t)(unsigned int, unsigned int, unsigned long, struct file *);
+
+#define COMPATIBLE_IOCTL(cmd)		HANDLE_IOCTL((cmd),sys_ioctl)
+#define HANDLE_IOCTL(cmd,handler)	{ (cmd), (ioctl32_handler_t)(handler), NULL },
+#define IOCTL_TABLE_START \
+	struct ioctl_trans ioctl_start[] = {
+#define IOCTL_TABLE_END \
+	}; struct ioctl_trans ioctl_end[0];
+
+IOCTL_TABLE_START
+#include <linux/compat_ioctl.h>
+HANDLE_IOCTL(VFAT_IOCTL_READDIR_BOTH32, vfat_ioctl32)
+HANDLE_IOCTL(VFAT_IOCTL_READDIR_SHORT32, vfat_ioctl32)
+HANDLE_IOCTL(SG_IO,sg_ioctl_trans)
+IOCTL_TABLE_END
@@ -14,7 +14,8 @@
#include <linux/vmalloc.h>
#include <asm/uaccess.h>
#include <asm/ia32.h>
#include "ia32priv.h"
#define P(p) ((void *) (unsigned long) (p))
...
@@ -28,7 +28,8 @@
#include <asm/rse.h>
#include <asm/sigcontext.h>
#include <asm/segment.h>
#include <asm/ia32.h>
#include "ia32priv.h"
#include "../kernel/sigframe.h"
@@ -179,8 +180,10 @@ copy_siginfo_to_user32 (siginfo_t32 *to, siginfo_t *from)
 * datasel ar.fdr(32:47)
 *
 * _st[(0+TOS)%8] f8
 * _st[(1+TOS)%8] f9      (f8, f9 from ptregs)
 * : : :                  (f10..f15 from live reg)
 * _st[(1+TOS)%8] f9
 * _st[(2+TOS)%8] f10
 * _st[(3+TOS)%8] f11     (f8..f11 from ptregs)
 * : : :                  (f12..f15 from live reg)
 * : : :
 * _st[(7+TOS)%8] f15 TOS=sw.top(bits11:13)
 *
@@ -262,8 +265,8 @@ save_ia32_fpstate_live (struct _fpstate_ia32 *save)
__put_user( 0, &save->magic); //#define X86_FXSR_MAGIC 0x0000
/*
 * save f8 and f9 from pt_regs
 * save f10..f15 from live register set
 * save f8..f11 from pt_regs
 * save f12..f15 from live register set
 */
/*
 * Find the location where f8 has to go in fp reg stack. This depends on
@@ -278,11 +281,11 @@ save_ia32_fpstate_live (struct _fpstate_ia32 *save)
copy_to_user(&save->_st[(0+fr8_st_map)&0x7], fpregp, sizeof(struct _fpreg_ia32));
ia64f2ia32f(fpregp, &ptp->f9);
copy_to_user(&save->_st[(1+fr8_st_map)&0x7], fpregp, sizeof(struct _fpreg_ia32));
__stfe(fpregp, 10);
ia64f2ia32f(fpregp, &ptp->f10);
copy_to_user(&save->_st[(2+fr8_st_map)&0x7], fpregp, sizeof(struct _fpreg_ia32));
__stfe(fpregp, 11);
ia64f2ia32f(fpregp, &ptp->f11);
copy_to_user(&save->_st[(3+fr8_st_map)&0x7], fpregp, sizeof(struct _fpreg_ia32));
__stfe(fpregp, 12);
copy_to_user(&save->_st[(4+fr8_st_map)&0x7], fpregp, sizeof(struct _fpreg_ia32));
__stfe(fpregp, 13);
@@ -394,8 +397,8 @@ restore_ia32_fpstate_live (struct _fpstate_ia32 *save)
asm volatile ( "mov ar.fdr=%0;" :: "r"(fdr));
/*
 * restore f8, f9 onto pt_regs
 * restore f10..f15 onto live registers
 * restore f8..f11 onto pt_regs
 * restore f12..f15 onto live registers
 */
/*
 * Find the location where f8 has to go in fp reg stack. This depends on
@@ -411,11 +414,11 @@ restore_ia32_fpstate_live (struct _fpstate_ia32 *save)
ia32f2ia64f(&ptp->f8, fpregp);
copy_from_user(fpregp, &save->_st[(1+fr8_st_map)&0x7], sizeof(struct _fpreg_ia32));
ia32f2ia64f(&ptp->f9, fpregp);
copy_from_user(fpregp, &save->_st[(2+fr8_st_map)&0x7], sizeof(struct _fpreg_ia32));
__ldfe(10, fpregp);
ia32f2ia64f(&ptp->f10, fpregp);
copy_from_user(fpregp, &save->_st[(3+fr8_st_map)&0x7], sizeof(struct _fpreg_ia32));
__ldfe(11, fpregp);
ia32f2ia64f(&ptp->f11, fpregp);
copy_from_user(fpregp, &save->_st[(4+fr8_st_map)&0x7], sizeof(struct _fpreg_ia32));
__ldfe(12, fpregp);
copy_from_user(fpregp, &save->_st[(5+fr8_st_map)&0x7], sizeof(struct _fpreg_ia32));
@@ -738,11 +741,11 @@ restore_sigcontext_ia32 (struct pt_regs *regs, struct sigcontext_ia32 *sc, int *
#define COPY(ia64x, ia32x) err |= __get_user(regs->ia64x, &sc->ia32x)
#define copyseg_gs(tmp) (regs->r16 |= (unsigned long) tmp << 48)
#define copyseg_gs(tmp) (regs->r16 |= (unsigned long) (tmp) << 48)
#define copyseg_fs(tmp) (regs->r16 |= (unsigned long) tmp << 32)
#define copyseg_fs(tmp) (regs->r16 |= (unsigned long) (tmp) << 32)
#define copyseg_cs(tmp) (regs->r17 |= tmp)
#define copyseg_ss(tmp) (regs->r17 |= (unsigned long) tmp << 16)
#define copyseg_ss(tmp) (regs->r17 |= (unsigned long) (tmp) << 16)
#define copyseg_es(tmp) (regs->r16 |= (unsigned long) tmp << 16)
#define copyseg_es(tmp) (regs->r16 |= (unsigned long) (tmp) << 16)
#define copyseg_ds(tmp) (regs->r16 |= tmp)
#define COPY_SEG(seg) \
...
@@ -22,7 +22,8 @@
#include <asm/pgtable.h>
#include <asm/system.h>
#include <asm/processor.h>
#include <asm/ia32.h>
#include "ia32priv.h"
extern void die_if_kernel (char *str, struct pt_regs *regs, long err);
@@ -60,30 +61,26 @@ ia32_load_segment_descriptors (struct task_struct *task)
regs->r27 = load_desc(regs->r16 >> 0); /* DSD */
regs->r28 = load_desc(regs->r16 >> 32); /* FSD */
regs->r29 = load_desc(regs->r16 >> 48); /* GSD */
task->thread.csd = load_desc(regs->r17 >> 0); /* CSD */
regs->ar_csd = load_desc(regs->r17 >> 0); /* CSD */
task->thread.ssd = load_desc(regs->r17 >> 16); /* SSD */
regs->ar_ssd = load_desc(regs->r17 >> 16); /* SSD */
}
void
ia32_save_state (struct task_struct *t)
{
unsigned long eflag, fsr, fcr, fir, fdr, csd, ssd;
unsigned long eflag, fsr, fcr, fir, fdr;
asm ("mov %0=ar.eflag;"
"mov %1=ar.fsr;"
"mov %2=ar.fcr;"
"mov %3=ar.fir;"
"mov %4=ar.fdr;"
"mov %5=ar.csd;"
"mov %6=ar.ssd;"
: "=r"(eflag), "=r"(fsr), "=r"(fcr), "=r"(fir), "=r"(fdr), "=r"(csd), "=r"(ssd));
: "=r"(eflag), "=r"(fsr), "=r"(fcr), "=r"(fir), "=r"(fdr));
t->thread.eflag = eflag;
t->thread.fsr = fsr;
t->thread.fcr = fcr;
t->thread.fir = fir;
t->thread.fdr = fdr;
t->thread.csd = csd;
t->thread.ssd = ssd;
ia64_set_kr(IA64_KR_IO_BASE, t->thread.old_iob);
ia64_set_kr(IA64_KR_TSSD, t->thread.old_k1);
}
@@ -91,7 +88,7 @@ ia32_save_state (struct task_struct *t)
void
ia32_load_state (struct task_struct *t)
{
unsigned long eflag, fsr, fcr, fir, fdr, csd, ssd, tssd;
unsigned long eflag, fsr, fcr, fir, fdr, tssd;
struct pt_regs *regs = ia64_task_regs(t);
int nr = get_cpu(); /* LDT and TSS depend on CPU number: */
@@ -100,8 +97,6 @@ ia32_load_state (struct task_struct *t)
fcr = t->thread.fcr;
fir = t->thread.fir;
fdr = t->thread.fdr;
csd = t->thread.csd;
ssd = t->thread.ssd;
tssd = load_desc(_TSS(nr)); /* TSSD */
asm volatile ("mov ar.eflag=%0;"
@@ -109,9 +104,7 @@ ia32_load_state (struct task_struct *t)
"mov ar.fcr=%2;"
"mov ar.fir=%3;"
"mov ar.fdr=%4;"
"mov ar.csd=%5;"
"mov ar.ssd=%6;"
:: "r"(eflag), "r"(fsr), "r"(fcr), "r"(fir), "r"(fdr), "r"(csd), "r"(ssd));
:: "r"(eflag), "r"(fsr), "r"(fcr), "r"(fir), "r"(fdr));
current->thread.old_iob = ia64_get_kr(IA64_KR_IO_BASE);
current->thread.old_k1 = ia64_get_kr(IA64_KR_TSSD);
ia64_set_kr(IA64_KR_IO_BASE, IA32_IOBASE);
@@ -181,6 +174,13 @@ ia32_bad_interrupt (unsigned long int_num, struct pt_regs *regs)
force_sig_info(SIGTRAP, &siginfo, current);
}
void
ia32_cpu_init (void)
{
/* initialize global ia32 state - CR0 and CR4 */
asm volatile ("mov ar.cflg = %0" :: "r" (((ulong) IA32_CR4 << 32) | IA32_CR0));
}
static int __init
ia32_init (void)
{
...
@@ -12,7 +12,8 @@
#include <linux/kernel.h>
#include <linux/sched.h>
#include <asm/ia32.h>
#include "ia32priv.h"
#include <asm/ptrace.h>
int
...
@@ -53,7 +53,8 @@
#include <asm/types.h>
#include <asm/uaccess.h>
#include <asm/semaphore.h>
#include <asm/ia32.h>
#include "ia32priv.h"
#include <net/scm.h>
#include <net/sock.h>
@@ -206,9 +207,8 @@ int cp_compat_stat(struct kstat *stat, struct compat_stat *ubuf)
static int
get_page_prot (unsigned long addr)
get_page_prot (struct vm_area_struct *vma, unsigned long addr)
{
struct vm_area_struct *vma = find_vma(current->mm, addr);
int prot = 0;
if (!vma || vma->vm_start > addr)
@@ -231,14 +231,26 @@ static unsigned long
mmap_subpage (struct file *file, unsigned long start, unsigned long end, int prot, int flags,
loff_t off)
{
void *page = (void *) get_zeroed_page(GFP_KERNEL);
void *page = NULL;
struct inode *inode;
unsigned long ret;
unsigned long ret = 0;
int old_prot = get_page_prot(start);
struct vm_area_struct *vma = find_vma(current->mm, start);
int old_prot = get_page_prot(vma, start);
DBG("mmap_subpage(file=%p,start=0x%lx,end=0x%lx,prot=%x,flags=%x,off=0x%llx)\n",
file, start, end, prot, flags, off);
/* Optimize the case where the old mmap and the new mmap are both anonymous */
if ((old_prot & PROT_WRITE) && (flags & MAP_ANONYMOUS) && !vma->vm_file) {
if (clear_user((void *) start, end - start)) {
ret = -EFAULT;
goto out;
}
goto skip_mmap;
}
page = (void *) get_zeroed_page(GFP_KERNEL);
if (!page)
return -ENOMEM;
@@ -263,6 +275,7 @@ mmap_subpage (struct file *file, unsigned long start, unsigned long end, int pro
copy_to_user((void *) end, page + PAGE_OFF(end),
PAGE_SIZE - PAGE_OFF(end));
}
if (!(flags & MAP_ANONYMOUS)) {
/* read the file contents */
inode = file->f_dentry->d_inode;
@@ -273,10 +286,13 @@ mmap_subpage (struct file *file, unsigned long start, unsigned long end, int pro
goto out;
}
}
skip_mmap:
if (!(prot & PROT_WRITE))
ret = sys_mprotect(PAGE_START(start), PAGE_SIZE, prot | old_prot);
out:
free_page((unsigned long) page);
if (page)
free_page((unsigned long) page);
return ret;
}
@@ -532,11 +548,12 @@ static long
mprotect_subpage (unsigned long address, int new_prot)
{
int old_prot;
struct vm_area_struct *vma;
if (new_prot == PROT_NONE)
return 0; /* optimize case where nothing changes... */
vma = find_vma(current->mm, address);
old_prot = get_page_prot(address);
old_prot = get_page_prot(vma, address);
return sys_mprotect(address, PAGE_SIZE, new_prot | old_prot);
}
@@ -642,7 +659,6 @@ sys32_alarm (unsigned int seconds)
sorts of things, like timeval and itimerval. */
extern struct timezone sys_tz;
extern int do_sys_settimeofday (struct timeval *tv, struct timezone *tz);
asmlinkage long
sys32_gettimeofday (struct compat_timeval *tv, struct timezone *tz)
@@ -664,18 +680,21 @@ asmlinkage long
sys32_settimeofday (struct compat_timeval *tv, struct timezone *tz)
{
struct timeval ktv;
struct timespec kts;
struct timezone ktz;
if (tv) {
if (get_tv32(&ktv, tv))
return -EFAULT;
kts.tv_sec = ktv.tv_sec;
kts.tv_nsec = ktv.tv_usec * 1000;
}
if (tz) {
if (copy_from_user(&ktz, tz, sizeof(ktz)))
return -EFAULT;
}
return do_sys_settimeofday(tv ? &ktv : NULL, tz ? &ktz : NULL);
return do_sys_settimeofday(tv ? &kts : NULL, tz ? &ktz : NULL);
}
struct getdents32_callback {
@@ -836,9 +855,8 @@ sys32_select (int n, fd_set *inp, fd_set *outp, fd_set *exp, struct compat_timev
}
}
size = FDS_BYTES(n);
ret = -EINVAL;
if (n < 0 || size < n)
if (n < 0)
goto out_nofds;
if (n > current->files->max_fdset)
@@ -850,6 +868,7 @@ sys32_select (int n, fd_set *inp, fd_set *outp, fd_set *exp, struct compat_timev
 * long-words.
 */
ret = -ENOMEM;
size = FDS_BYTES(n);
bits = kmalloc(6 * size, GFP_KERNEL);
if (!bits)
goto out_nofds;
@@ -1102,7 +1121,7 @@ struct shmid_ds32 {
};
struct shmid64_ds32 {
struct ipc64_perm shm_perm;
struct ipc64_perm32 shm_perm;
compat_size_t shm_segsz;
compat_time_t shm_atime;
unsigned int __unused1;
@@ -1320,7 +1339,6 @@ static int
msgctl32 (int first, int second, void *uptr)
{
int err = -EINVAL, err2;
struct msqid_ds m;
struct msqid64_ds m64;
struct msqid_ds32 *up32 = (struct msqid_ds32 *)uptr;
struct msqid64_ds32 *up64 = (struct msqid64_ds32 *)uptr;
@@ -1336,21 +1354,21 @@ msgctl32 (int first, int second, void *uptr)
case IPC_SET:
if (version == IPC_64) {
err = get_user(m.msg_perm.uid, &up64->msg_perm.uid);
err |= get_user(m.msg_perm.gid, &up64->msg_perm.gid);
err |= get_user(m.msg_perm.mode, &up64->msg_perm.mode);
err |= get_user(m.msg_qbytes, &up64->msg_qbytes);
err = get_user(m64.msg_perm.uid, &up64->msg_perm.uid);
err |= get_user(m64.msg_perm.gid, &up64->msg_perm.gid);
err |= get_user(m64.msg_perm.mode, &up64->msg_perm.mode);
err |= get_user(m64.msg_qbytes, &up64->msg_qbytes);
} else {
err = get_user(m.msg_perm.uid, &up32->msg_perm.uid);
err |= get_user(m.msg_perm.gid, &up32->msg_perm.gid);
err |= get_user(m.msg_perm.mode, &up32->msg_perm.mode);
err |= get_user(m.msg_qbytes, &up32->msg_qbytes);
err = get_user(m64.msg_perm.uid, &up32->msg_perm.uid);
err |= get_user(m64.msg_perm.gid, &up32->msg_perm.gid);
err |= get_user(m64.msg_perm.mode, &up32->msg_perm.mode);
err |= get_user(m64.msg_qbytes, &up32->msg_qbytes);
}
if (err)
break;
old_fs = get_fs();
set_fs(KERNEL_DS);
err = sys_msgctl(first, second, &m);
err = sys_msgctl(first, second, &m64);
set_fs(old_fs);
break;
@@ -1430,7 +1448,7 @@ static int
shmctl32 (int first, int second, void *uptr)
{
int err = -EFAULT, err2;
struct shmid_ds s;
struct shmid64_ds s64;
struct shmid_ds32 *up32 = (struct shmid_ds32 *)uptr;
struct shmid64_ds32 *up64 = (struct shmid64_ds32 *)uptr;
@@ -1482,19 +1500,19 @@ shmctl32 (int first, int second, void *uptr)
case IPC_SET:
if (version == IPC_64) {
err = get_user(s.shm_perm.uid, &up64->shm_perm.uid);
err |= get_user(s.shm_perm.gid, &up64->shm_perm.gid);
err |= get_user(s.shm_perm.mode, &up64->shm_perm.mode);
err = get_user(s64.shm_perm.uid, &up64->shm_perm.uid);
err |= get_user(s64.shm_perm.gid, &up64->shm_perm.gid);
err |= get_user(s64.shm_perm.mode, &up64->shm_perm.mode);
} else {
err = get_user(s.shm_perm.uid, &up32->shm_perm.uid);
err |= get_user(s.shm_perm.gid, &up32->shm_perm.gid);
err |= get_user(s.shm_perm.mode, &up32->shm_perm.mode);
err = get_user(s64.shm_perm.uid, &up32->shm_perm.uid);
err |= get_user(s64.shm_perm.gid, &up32->shm_perm.gid);
err |= get_user(s64.shm_perm.mode, &up32->shm_perm.mode);
}
if (err)
break;
old_fs = get_fs();
set_fs(KERNEL_DS);
err = sys_shmctl(first, second, &s);
err = sys_shmctl(first, second, &s64);
set_fs(old_fs);
break;
@@ -1798,12 +1816,16 @@ put_fpreg (int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switc
ia64f2ia32f(f, &ptp->f9);
break;
case 2:
ia64f2ia32f(f, &ptp->f10);
break;
case 3:
ia64f2ia32f(f, &ptp->f11);
break;
case 4:
case 5:
case 6:
case 7:
ia64f2ia32f(f, &swp->f10 + (regno - 2));
ia64f2ia32f(f, &swp->f12 + (regno - 4));
break;
}
copy_to_user(reg, f, sizeof(*reg));
@@ -1824,12 +1846,16 @@ get_fpreg (int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct switc
copy_from_user(&ptp->f9, reg, sizeof(*reg));
break;
case 2:
copy_from_user(&ptp->f10, reg, sizeof(*reg));
break;
case 3:
copy_from_user(&ptp->f11, reg, sizeof(*reg));
break;
case 4:
case 5:
case 6:
case 7:
copy_from_user(&swp->f10 + (regno - 2), reg, sizeof(*reg));
copy_from_user(&swp->f12 + (regno - 4), reg, sizeof(*reg));
break;
}
return;
@@ -1860,7 +1886,7 @@ save_ia32_fpstate (struct task_struct *tsk, struct ia32_user_i387_struct *save)
ptp = ia64_task_regs(tsk);
tos = (tsk->thread.fsr >> 11) & 7;
for (i = 0; i < 8; i++)
put_fpreg(i, (struct _fpreg_ia32 *)&save->st_space[4*i], ptp, swp, tos);
put_fpreg(i, &save->st_space[i], ptp, swp, tos);
return 0;
}
@@ -1893,7 +1919,7 @@ restore_ia32_fpstate (struct task_struct *tsk, struct ia32_user_i387_struct *sav
ptp = ia64_task_regs(tsk);
tos = (tsk->thread.fsr >> 11) & 7;
for (i = 0; i < 8; i++)
get_fpreg(i, (struct _fpreg_ia32 *)&save->st_space[4*i], ptp, swp, tos);
get_fpreg(i, &save->st_space[i], ptp, swp, tos);
return 0;
}
...
@@ -4,12 +4,11 @@
extra-y := head.o init_task.o
obj-y := acpi.o entry.o efi.o efi_stub.o gate.o ia64_ksyms.o irq.o irq_ia64.o irq_lsapic.o \
	ivt.o machvec.o pal.o perfmon.o process.o ptrace.o sal.o semaphore.o setup.o signal.o \
	sys_ia64.o time.o traps.o unaligned.o unwind.o
obj-y := acpi.o entry.o efi.o efi_stub.o gate-data.o fsys.o ia64_ksyms.o irq.o irq_ia64.o \
	irq_lsapic.o ivt.o machvec.o pal.o patch.o process.o perfmon.o ptrace.o sal.o \
	semaphore.o setup.o signal.o sys_ia64.o time.o traps.o unaligned.o unwind.o
obj-$(CONFIG_EFI_VARS) += efivars.o
obj-$(CONFIG_FSYS) += fsys.o
obj-$(CONFIG_IA64_BRL_EMU) += brl_emu.o
obj-$(CONFIG_IA64_GENERIC) += acpi-ext.o
obj-$(CONFIG_IA64_HP_ZX1) += acpi-ext.o
@@ -18,3 +17,30 @@ obj-$(CONFIG_IA64_PALINFO) += palinfo.o
obj-$(CONFIG_IOSAPIC) += iosapic.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_SMP) += smp.o smpboot.o
obj-$(CONFIG_PERFMON) += perfmon_default_smpl.o
# The gate DSO image is built using a special linker script.
targets += gate.so gate-syms.o
AFLAGS_gate.lds.o += -P -C -U$(ARCH)
arch/ia64/kernel/gate.lds.s: %.s: %.S scripts FORCE
$(call if_changed_dep,as_s_S)
quiet_cmd_gate = GATE $@
cmd_gate = $(CC) -nostdlib $(GATECFLAGS_$(@F)) -Wl,-T,$(filter-out FORCE,$^) -o $@
GATECFLAGS_gate.so = -shared -s -Wl,-soname=linux-gate.so.1
$(obj)/gate.so: $(src)/gate.lds.s $(obj)/gate.o FORCE
$(call if_changed,gate)
$(obj)/built-in.o: $(obj)/gate-syms.o
$(obj)/built-in.o: ld_flags += -R $(obj)/gate-syms.o
GATECFLAGS_gate-syms.o = -r
$(obj)/gate-syms.o: $(src)/gate.lds.s $(obj)/gate.o FORCE
$(call if_changed,gate)
# gate-data.o contains the gate DSO image as data in section .data.gate.
# We must build gate.so before we can assemble it.
# Note: kbuild does not track this dependency due to usage of .incbin
$(obj)/gate-data.o: $(obj)/gate.so
@@ -84,7 +84,6 @@ hp_acpi_csr_space(acpi_handle obj, u64 *csr_base, u64 *csr_length)
acpi_status status;
u8 *data;
u32 length;
int i;
status = acpi_find_vendor_resource(obj, &hp_ccsr_descriptor, &data, &length);
...
@@ -96,6 +96,9 @@ acpi_get_sysname (void)
if (!strcmp(hdr->oem_id, "HP")) {
return "hpzx1";
}
else if (!strcmp(hdr->oem_id, "SGI")) {
return "sn2";
}
return "dig";
#else
@@ -103,8 +106,6 @@ acpi_get_sysname (void)
return "hpsim";
# elif defined (CONFIG_IA64_HP_ZX1)
return "hpzx1";
# elif defined (CONFIG_IA64_SGI_SN1)
return "sn1";
# elif defined (CONFIG_IA64_SGI_SN2)
return "sn2";
# elif defined (CONFIG_IA64_DIG)
@@ -191,21 +192,19 @@ acpi_parse_lsapic (acpi_table_entry_header *header)
printk(KERN_INFO "CPU %d (0x%04x)", total_cpus, (lsapic->id << 8) | lsapic->eid);
if (lsapic->flags.enabled) {
available_cpus++;
if (!lsapic->flags.enabled)
printk(" disabled");
else if (available_cpus >= NR_CPUS)
printk(" ignored (increase NR_CPUS)");
else {
printk(" enabled");
#ifdef CONFIG_SMP
smp_boot_data.cpu_phys_id[total_cpus] = (lsapic->id << 8) | lsapic->eid;
smp_boot_data.cpu_phys_id[available_cpus] = (lsapic->id << 8) | lsapic->eid;
if (hard_smp_processor_id()
== (unsigned int) smp_boot_data.cpu_phys_id[total_cpus])
== (unsigned int) smp_boot_data.cpu_phys_id[available_cpus])
printk(" (BSP)");
#endif
++available_cpus;
}
else {
printk(" disabled");
#ifdef CONFIG_SMP
smp_boot_data.cpu_phys_id[total_cpus] = -1;
#endif
}
printk("\n");
@@ -694,11 +693,11 @@ acpi_boot_init (void)
#endif
#ifdef CONFIG_SMP
smp_boot_data.cpu_count = available_cpus;
if (available_cpus == 0) {
printk(KERN_INFO "ACPI: Found 0 CPUS; assuming 1\n");
available_cpus = 1; /* We've got at least one of these, no? */
}
smp_boot_data.cpu_count = total_cpus;
smp_build_cpu_map();
# ifdef CONFIG_NUMA
...
@@ -62,7 +62,7 @@ GLOBAL_ENTRY(efi_call_phys)
mov b6=r2
;;
andcm r16=loc3,r16 // get psr with IT, DT, and RT bits cleared
br.call.sptk.many rp=ia64_switch_mode
br.call.sptk.many rp=ia64_switch_mode_phys
.ret0: mov out4=in5
mov out0=in1
mov out1=in2
@@ -73,7 +73,7 @@ GLOBAL_ENTRY(efi_call_phys)
br.call.sptk.many rp=b6 // call the EFI function
.ret1: mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3
br.call.sptk.many rp=ia64_switch_mode // return to virtual mode
br.call.sptk.many rp=ia64_switch_mode_virt // return to virtual mode
.ret2: mov ar.rsc=loc4 // restore RSE configuration
mov ar.pfs=loc1
mov rp=loc0
...
@@ -4,8 +4,9 @@
 * Preserved registers that are shared between code in ivt.S and entry.S. Be
 * careful not to step on these!
 */
#define pKStk p2 /* will leave_kernel return to kernel-stacks? */
#define pUStk p3 /* will leave_kernel return to user-stacks? */
#define pLvSys p1 /* set 1 if leave from syscall; otherwise, set 0*/
#define pKStk p2 /* will leave_{kernel,syscall} return to kernel-stacks? */
#define pUStk p3 /* will leave_{kernel,syscall} return to user-stacks? */
#define pSys p4 /* are we processing a (synchronous) system call? */
#define pNonSys p5 /* complement of pSys */
@@ -13,6 +14,7 @@
#define SW(f) (IA64_SWITCH_STACK_##f##_OFFSET)
#define PT_REGS_SAVES(off) \
.unwabi 3, 'i'; \
.unwabi @svr4, 'i'; \
.fframe IA64_PT_REGS_SIZE+16+(off); \
.spillsp rp, PT(CR_IIP)+16+(off); \
...
.section .data.gate, "ax"
.incbin "arch/ia64/kernel/gate.so"
@@ -6,20 +6,47 @@
 * David Mosberger-Tang <davidm@hpl.hp.com>
 */
#include <linux/config.h>
#include <asm/asmmacro.h>
#include <asm/errno.h>
#include <asm/offsets.h>
#include <asm/sigcontext.h>
#include <asm/system.h>
#include <asm/unistd.h>
#include <asm/page.h>
.section .text.gate, "ax"
.start_gate:
/*
* We can't easily refer to symbols inside the kernel. To avoid full runtime relocation,
* complications with the linker (which likes to create PLT stubs for branches
* to targets outside the shared object) and to avoid multi-phase kernel builds, we
* simply create minimalistic "patch lists" in special ELF sections.
*/
.section ".data.patch.fsyscall_table", "a"
.previous
#define LOAD_FSYSCALL_TABLE(reg) \
[1:] movl reg=0; \
.xdata4 ".data.patch.fsyscall_table", 1b-.
#ifdef CONFIG_FSYS
.section ".data.patch.brl_fsys_bubble_down", "a"
.previous
#define BRL_COND_FSYS_BUBBLE_DOWN(pr) \
[1:](pr)brl.cond.sptk 0; \
.xdata4 ".data.patch.brl_fsys_bubble_down", 1b-.
#include <asm/errno.h>
GLOBAL_ENTRY(__kernel_syscall_via_break)
.prologue
.altrp b6
.body
/*
* Note: for (fast) syscall restart to work, the break instruction must be
* the first one in the bundle addressed by syscall_via_break.
*/
{ .mib
break 0x100000
nop.i 0
br.ret.sptk.many b6
}
END(__kernel_syscall_via_break)
/*
 * On entry:
@@ -34,7 +61,8 @@
 * all other "scratch" registers: undefined
 * all "preserved" registers: same as on entry
 */
GLOBAL_ENTRY(syscall_via_epc)
GLOBAL_ENTRY(__kernel_syscall_via_epc)
.prologue
.altrp b6
.body
@@ -49,52 +77,50 @@ GLOBAL_ENTRY(syscall_via_epc)
epc
}
;;
rsm psr.be rsm psr.be // note: on McKinley "rsm psr.be/srlz.d" is slightly faster than "rum psr.be"
movl r18=fsyscall_table LOAD_FSYSCALL_TABLE(r14)
mov r16=IA64_KR(CURRENT) mov r16=IA64_KR(CURRENT) // 12 cycle read latency
mov r19=255 mov r19=NR_syscalls-1
;; ;;
shladd r18=r17,3,r18 shladd r18=r17,3,r14
cmp.geu p6,p0=r19,r17 // (syscall > 0 && syscall <= 1024+255)?
srlz.d
cmp.ne p8,p0=r0,r0 // p8 <- FALSE
/* Note: if r17 is a NaT, p6 will be set to zero. */
cmp.geu p6,p7=r19,r17 // (syscall > 0 && syscall < 1024+NR_syscalls)?
;; ;;
srlz.d // ensure little-endian byteorder is in effect
(p6) ld8 r18=[r18] (p6) ld8 r18=[r18]
mov r29=psr // read psr (12 cyc load latency)
add r14=-8,r14 // r14 <- addr of fsys_bubble_down entry
;; ;;
(p6) mov b7=r18 (p6) mov b7=r18
(p6) tbit.z p8,p0=r18,0
(p8) br.dptk.many b7
mov r27=ar.rsc
mov r21=ar.fpsr
mov r26=ar.pfs
/*
* brl.cond doesn't work as intended because the linker would convert this branch
* into a branch to a PLT. Perhaps there will be a way to avoid this with some
* future version of the linker. In the meantime, we just use an indirect branch
* instead.
*/
#ifdef CONFIG_ITANIUM
(p6) ld8 r14=[r14] // r14 <- fsys_bubble_down
;;
(p6) mov b7=r14
(p6) br.sptk.many b7 (p6) br.sptk.many b7
#else
BRL_COND_FSYS_BUBBLE_DOWN(p6)
#endif
mov r10=-1 mov r10=-1
mov r8=ENOSYS mov r8=ENOSYS
MCKINLEY_E9_WORKAROUND MCKINLEY_E9_WORKAROUND
br.ret.sptk.many b6 br.ret.sptk.many b6
END(syscall_via_epc) END(__kernel_syscall_via_epc)
GLOBAL_ENTRY(syscall_via_break)
.prologue
.altrp b6
.body
break 0x100000
br.ret.sptk.many b6
END(syscall_via_break)
GLOBAL_ENTRY(fsys_fallback_syscall)
/*
* It would be better/fsyser to do the SAVE_MIN magic directly here, but for now
* we simply fall back on doing a system-call via break. Good enough
* to get started. (Note: we have to do this through the gate page again, since
* the br.ret will switch us back to user-level privilege.)
*
* XXX Move this back to fsys.S after changing it over to avoid break 0x100000.
*/
movl r2=(syscall_via_break - .start_gate) + GATE_ADDR
;;
MCKINLEY_E9_WORKAROUND
mov b7=r2
br.ret.sptk.many b7
END(fsys_fallback_syscall)
#endif /* CONFIG_FSYS */
# define ARG0_OFF (16 + IA64_SIGFRAME_ARG0_OFFSET)
# define ARG1_OFF (16 + IA64_SIGFRAME_ARG1_OFFSET)
@@ -145,7 +171,8 @@ END(fsys_fallback_syscall)
 */
#define SIGTRAMP_SAVES \
.unwabi @svr4, 's'; /* mark this as a sigtramp handler (saves scratch regs) */ \ .unwabi 3, 's'; /* mark this as a sigtramp handler (saves scratch regs) */ \
.unwabi @svr4, 's'; /* backwards compatibility with old unwinders (remove in v2.7) */ \
.savesp ar.unat, UNAT_OFF+SIGCONTEXT_OFF; \
.savesp ar.fpsr, FPSR_OFF+SIGCONTEXT_OFF; \
.savesp pr, PR_OFF+SIGCONTEXT_OFF; \
@@ -153,7 +180,7 @@ END(fsys_fallback_syscall)
.savesp ar.pfs, CFM_OFF+SIGCONTEXT_OFF; \
.vframesp SP_OFF+SIGCONTEXT_OFF
GLOBAL_ENTRY(ia64_sigtramp) GLOBAL_ENTRY(__kernel_sigtramp)
// describe the state that is active when we get here:
.prologue
SIGTRAMP_SAVES
@@ -335,4 +362,4 @@ restore_rbs:
mov ar.rsc=0xf // (will be restored later on from sc_ar_rsc)
// invala not necessary as that will happen when returning to user-mode
br.cond.sptk back_from_restore_rbs
END(ia64_sigtramp) END(__kernel_sigtramp)
/*
* Linker script for gate DSO. The gate pages are an ELF shared object prelinked to its
* virtual address, with only one read-only segment and one execute-only segment (both fit
* in one page). This script controls its layout.
*/
#include <linux/config.h>
#include <asm/system.h>
SECTIONS
{
. = GATE_ADDR + SIZEOF_HEADERS;
.hash : { *(.hash) } :readable
.dynsym : { *(.dynsym) }
.dynstr : { *(.dynstr) }
.gnu.version : { *(.gnu.version) }
.gnu.version_d : { *(.gnu.version_d) }
.gnu.version_r : { *(.gnu.version_r) }
.dynamic : { *(.dynamic) } :readable :dynamic
/*
* This linker script is used both with -r and with -shared. For the layouts to match,
* we need to skip more than enough space for the dynamic symbol table et al. If this
* amount is insufficient, ld -shared will barf. Just increase it here.
*/
. = GATE_ADDR + 0x500;
.data.patch : {
__start_gate_mckinley_e9_patchlist = .;
*(.data.patch.mckinley_e9)
__end_gate_mckinley_e9_patchlist = .;
__start_gate_vtop_patchlist = .;
*(.data.patch.vtop)
__end_gate_vtop_patchlist = .;
__start_gate_fsyscall_patchlist = .;
*(.data.patch.fsyscall_table)
__end_gate_fsyscall_patchlist = .;
__start_gate_brl_fsys_bubble_down_patchlist = .;
*(.data.patch.brl_fsys_bubble_down)
__end_gate_brl_fsys_bubble_down_patchlist = .;
} :readable
.IA_64.unwind_info : { *(.IA_64.unwind_info*) }
.IA_64.unwind : { *(.IA_64.unwind*) } :readable :unwind
#ifdef HAVE_BUGGY_SEGREL
.text (GATE_ADDR + PAGE_SIZE) : { *(.text) *(.text.*) } :readable
#else
. = ALIGN (PERCPU_PAGE_SIZE) + (. & (PERCPU_PAGE_SIZE - 1));
.text : { *(.text) *(.text.*) } :epc
#endif
/DISCARD/ : {
*(.got.plt) *(.got)
*(.data .data.* .gnu.linkonce.d.*)
*(.dynbss)
*(.bss .bss.* .gnu.linkonce.b.*)
*(__ex_table)
}
}
/*
* We must supply the ELF program headers explicitly to get just one
* PT_LOAD segment, and set the flags explicitly to make segments read-only.
*/
PHDRS
{
readable PT_LOAD FILEHDR PHDRS FLAGS(4); /* PF_R */
#ifndef HAVE_BUGGY_SEGREL
epc PT_LOAD FILEHDR PHDRS FLAGS(1); /* PF_X */
#endif
dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
unwind 0x70000001; /* PT_IA_64_UNWIND, but ld doesn't match the name */
}
/*
* This controls what symbols we export from the DSO.
*/
VERSION
{
LINUX_2.5 {
global:
__kernel_syscall_via_break;
__kernel_syscall_via_epc;
__kernel_sigtramp;
local: *;
};
}
/* The ELF entry point can be used to set the AT_SYSINFO value. */
ENTRY(__kernel_syscall_via_epc)
@@ -60,22 +60,42 @@ start_ap:
mov r4=r0
.body
/*
* Initialize the region register for region 7 and install a translation register
* that maps the kernel's text and data:
*/
rsm psr.i | psr.ic
mov r16=((ia64_rid(IA64_REGION_ID_KERNEL, PAGE_OFFSET) << 8) | (IA64_GRANULE_SHIFT << 2))
;;
srlz.i
;;
/*
* Initialize kernel region registers:
* rr[5]: VHPT enabled, page size = PAGE_SHIFT
* rr[6]: VHPT disabled, page size = IA64_GRANULE_SHIFT
* rr[5]: VHPT disabled, page size = IA64_GRANULE_SHIFT
*/
mov r16=((ia64_rid(IA64_REGION_ID_KERNEL, (5<<61)) << 8) | (PAGE_SHIFT << 2) | 1)
movl r17=(5<<61)
mov r18=((ia64_rid(IA64_REGION_ID_KERNEL, (6<<61)) << 8) | (IA64_GRANULE_SHIFT << 2))
movl r19=(6<<61)
mov r20=((ia64_rid(IA64_REGION_ID_KERNEL, (7<<61)) << 8) | (IA64_GRANULE_SHIFT << 2))
movl r21=(7<<61)
;;
mov rr[r17]=r16
mov rr[r19]=r18
mov rr[r21]=r20
;;
/*
* Now pin mappings into the TLB for kernel text and data
*/
mov r18=KERNEL_TR_PAGE_SHIFT<<2
movl r17=KERNEL_START
;;
mov rr[r17]=r16
mov cr.itir=r18
mov cr.ifa=r17
mov r16=IA64_TR_KERNEL
movl r18=((1 << KERNEL_TR_PAGE_SHIFT) | PAGE_KERNEL) mov r3=ip
movl r18=PAGE_KERNEL
;;
dep r2=0,r3,0,KERNEL_TR_PAGE_SHIFT
;;
or r18=r2,r18
;;
srlz.i
;;
@@ -113,16 +133,6 @@ start_ap:
mov ar.fpsr=r2
;;
#ifdef CONFIG_IA64_EARLY_PRINTK
mov r3=(6<<8) | (IA64_GRANULE_SHIFT<<2)
movl r2=6<<61
;;
mov rr[r2]=r3
;;
srlz.i
;;
#endif
#define isAP p2 // are we an Application Processor?
#define isBP p3 // are we the Bootstrap Processor?
@@ -143,12 +153,36 @@ start_ap:
movl r2=init_thread_union
cmp.eq isBP,isAP=r0,r0
#endif
mov r16=KERNEL_TR_PAGE_NUM
;;
tpa r3=r2 // r3 == phys addr of task struct
// load mapping for stack (virtaddr in r2, physaddr in r3)
rsm psr.ic
movl r17=PAGE_KERNEL
;;
srlz.d
dep r18=0,r3,0,12
;;
or r18=r17,r18
dep r2=-1,r3,61,3 // IMVA of task
;;
mov r17=rr[r2]
shr.u r16=r3,IA64_GRANULE_SHIFT
;;
dep r17=0,r17,8,24
;;
mov cr.itir=r17
mov cr.ifa=r2
mov r19=IA64_TR_CURRENT_STACK
;;
itr.d dtr[r19]=r18
;;
ssm psr.ic
srlz.d
;;
// load the "current" pointer (r13) and ar.k6 with the current task
mov IA64_KR(CURRENT)=r2 // virtual address
// initialize k4 to a safe value (64-128MB is mapped by TR_KERNEL)
mov IA64_KR(CURRENT_STACK)=r16
mov r13=r2
/*
@@ -665,14 +699,14 @@ GLOBAL_ENTRY(__ia64_init_fpu)
END(__ia64_init_fpu)
/*
* Switch execution mode from virtual to physical or vice versa. * Switch execution mode from virtual to physical
 *
 * Inputs:
 * r16 = new psr to establish
 *
 * Note: RSE must already be in enforced lazy mode
 */
GLOBAL_ENTRY(ia64_switch_mode) GLOBAL_ENTRY(ia64_switch_mode_phys)
{
alloc r2=ar.pfs,0,0,0,0
rsm psr.i | psr.ic // disable interrupts and interrupt collection
@@ -682,35 +716,86 @@ GLOBAL_ENTRY(ia64_switch_mode)
{
flushrs // must be first insn in group
srlz.i
shr.u r19=r15,61 // r19 <- top 3 bits of current IP
} }
;; ;;
mov cr.ipsr=r16 // set new PSR mov cr.ipsr=r16 // set new PSR
add r3=1f-ia64_switch_mode,r15 add r3=1f-ia64_switch_mode_phys,r15
xor r15=0x7,r19 // flip the region bits
mov r17=ar.bsp mov r17=ar.bsp
mov r14=rp // get return address into a general register mov r14=rp // get return address into a general register
;;
// switch RSE backing store: // going to physical mode, use tpa to translate virt->phys
tpa r17=r17
tpa r3=r3
tpa sp=sp
tpa r14=r14
;; ;;
dep r17=r15,r17,61,3 // make ar.bsp physical or virtual
mov r18=ar.rnat // save ar.rnat mov r18=ar.rnat // save ar.rnat
;;
mov ar.bspstore=r17 // this steps on ar.rnat mov ar.bspstore=r17 // this steps on ar.rnat
dep r3=r15,r3,61,3 // make rfi return address physical or virtual mov cr.iip=r3
mov cr.ifs=r0
;; ;;
mov ar.rnat=r18 // restore ar.rnat
rfi // must be last insn in group
;;
1: mov rp=r14
br.ret.sptk.many rp
END(ia64_switch_mode_phys)
/*
* Switch execution mode from physical to virtual
*
* Inputs:
* r16 = new psr to establish
*
* Note: RSE must already be in enforced lazy mode
*/
GLOBAL_ENTRY(ia64_switch_mode_virt)
{
alloc r2=ar.pfs,0,0,0,0
rsm psr.i | psr.ic // disable interrupts and interrupt collection
mov r15=ip
}
;;
{
flushrs // must be first insn in group
srlz.i
}
;;
mov cr.ipsr=r16 // set new PSR
add r3=1f-ia64_switch_mode_virt,r15
mov r17=ar.bsp
mov r14=rp // get return address into a general register
;;
// going to virtual
// - for code addresses, set upper bits of addr to KERNEL_START
// - for stack addresses, set upper 3 bits to 0xe.... Don't change any of the
// lower bits since we want it to stay identity mapped
movl r18=KERNEL_START
dep r3=0,r3,KERNEL_TR_PAGE_SHIFT,64-KERNEL_TR_PAGE_SHIFT
dep r14=0,r14,KERNEL_TR_PAGE_SHIFT,64-KERNEL_TR_PAGE_SHIFT
dep r17=-1,r17,61,3
dep sp=-1,sp,61,3
;;
or r3=r3,r18
or r14=r14,r18
;;
mov r18=ar.rnat // save ar.rnat
mov ar.bspstore=r17 // this steps on ar.rnat
mov cr.iip=r3 mov cr.iip=r3
mov cr.ifs=r0 mov cr.ifs=r0
dep sp=r15,sp,61,3 // make stack pointer physical or virtual
;; ;;
mov ar.rnat=r18 // restore ar.rnat mov ar.rnat=r18 // restore ar.rnat
dep r14=r15,r14,61,3 // make function return address physical or virtual
rfi // must be last insn in group rfi // must be last insn in group
;; ;;
1: mov rp=r14 1: mov rp=r14
br.ret.sptk.many rp br.ret.sptk.many rp
END(ia64_switch_mode) END(ia64_switch_mode_virt)
#ifdef CONFIG_IA64_BRL_EMU #ifdef CONFIG_IA64_BRL_EMU
@@ -753,7 +838,7 @@ SET_REG(b5);
 * r29 - available for use.
 * r30 - available for use.
 * r31 - address of lock, available for use.
* b7 - return address * b6 - return address
 * p14 - available for use.
 *
 * If you patch this code to use more registers, do not forget to update
@@ -65,6 +65,9 @@ EXPORT_SYMBOL(ia64_pfn_valid);
#include <asm/processor.h>
EXPORT_SYMBOL(cpu_info__per_cpu);
#ifdef CONFIG_SMP
EXPORT_SYMBOL(__per_cpu_offset);
#endif
EXPORT_SYMBOL(kernel_thread);
#include <asm/system.h>
@@ -88,6 +91,7 @@ EXPORT_SYMBOL(synchronize_irq);
EXPORT_SYMBOL(smp_call_function);
EXPORT_SYMBOL(smp_call_function_single);
EXPORT_SYMBOL(cpu_online_map);
EXPORT_SYMBOL(phys_cpu_present_map);
EXPORT_SYMBOL(ia64_cpu_to_sapicid);
#else /* !CONFIG_SMP */
@@ -124,6 +128,18 @@ EXPORT_SYMBOL_NOVERS(__udivdi3);
EXPORT_SYMBOL_NOVERS(__moddi3);
EXPORT_SYMBOL_NOVERS(__umoddi3);
#if defined(CONFIG_MD_RAID5) || defined(CONFIG_MD_RAID5_MODULE)
extern void xor_ia64_2(void);
extern void xor_ia64_3(void);
extern void xor_ia64_4(void);
extern void xor_ia64_5(void);
EXPORT_SYMBOL_NOVERS(xor_ia64_2);
EXPORT_SYMBOL_NOVERS(xor_ia64_3);
EXPORT_SYMBOL_NOVERS(xor_ia64_4);
EXPORT_SYMBOL_NOVERS(xor_ia64_5);
#endif
extern unsigned long ia64_iobase;
EXPORT_SYMBOL(ia64_iobase);
@@ -147,10 +163,15 @@ EXPORT_SYMBOL(efi_dir);
EXPORT_SYMBOL(ia64_mv);
#endif
EXPORT_SYMBOL(machvec_noop);
EXPORT_SYMBOL(machvec_memory_fence);
EXPORT_SYMBOL(zero_page_memmap_ptr);
#ifdef CONFIG_PERFMON
#include <asm/perfmon.h>
EXPORT_SYMBOL(pfm_install_alternate_syswide_subsystem); EXPORT_SYMBOL(pfm_register_buffer_fmt);
EXPORT_SYMBOL(pfm_remove_alternate_syswide_subsystem); EXPORT_SYMBOL(pfm_unregister_buffer_fmt);
EXPORT_SYMBOL(pfm_mod_fast_read_pmds);
EXPORT_SYMBOL(pfm_mod_read_pmds);
EXPORT_SYMBOL(pfm_mod_write_pmcs);
#endif
#ifdef CONFIG_NUMA
@@ -169,10 +190,25 @@ EXPORT_SYMBOL(unw_access_fr);
EXPORT_SYMBOL(unw_access_ar);
EXPORT_SYMBOL(unw_access_pr);
#if __GNUC__ < 3 || (__GNUC__ == 3 && __GNUC_MINOR__ < 4) #ifdef CONFIG_SMP
extern void ia64_spinlock_contention_pre3_4 (void); # if __GNUC__ < 3 || (__GNUC__ == 3 && __GNUC_MINOR__ < 4)
/*
* This is not a normal routine and we don't want a function descriptor for it, so we use
* a fake declaration here.
*/
extern char ia64_spinlock_contention_pre3_4;
EXPORT_SYMBOL(ia64_spinlock_contention_pre3_4); EXPORT_SYMBOL(ia64_spinlock_contention_pre3_4);
#else # else
extern void ia64_spinlock_contention (void); /*
* This is not a normal routine and we don't want a function descriptor for it, so we use
* a fake declaration here.
*/
extern char ia64_spinlock_contention;
EXPORT_SYMBOL(ia64_spinlock_contention); EXPORT_SYMBOL(ia64_spinlock_contention);
# endif
#endif #endif
EXPORT_SYMBOL(ia64_max_iommu_merge_mask);
#include <linux/pm.h>
EXPORT_SYMBOL(pm_idle);
@@ -36,7 +36,7 @@ union init_thread {
unsigned long stack[KERNEL_STACK_SIZE/sizeof (unsigned long)];
} init_thread_union __attribute__((section(".data.init_task"))) = {{
.task = INIT_TASK(init_thread_union.s.task),
.thread_info = INIT_THREAD_INFO(init_thread_union.s.thread_info) .thread_info = INIT_THREAD_INFO(init_thread_union.s.task)
}};
asm (".global init_task; init_task = init_thread_union");
@@ -65,7 +65,7 @@
/*
 * Controller mappings for all interrupt sources:
 */
irq_desc_t irq_desc[NR_IRQS] __cacheline_aligned = { irq_desc_t _irq_desc[NR_IRQS] __cacheline_aligned = {
[0 ... NR_IRQS-1] = {
.status = IRQ_DISABLED,
.handler = &no_irq_type,
@@ -235,7 +235,6 @@ int handle_IRQ_event(unsigned int irq,
{
int status = 1; /* Force the "do bottom halves" bit */
int retval = 0;
struct irqaction *first_action = action;
if (!(action->flags & SA_INTERRUPT))
local_irq_enable();
@@ -248,30 +247,88 @@ int handle_IRQ_event(unsigned int irq,
if (status & SA_SAMPLE_RANDOM)
add_interrupt_randomness(irq);
local_irq_disable();
if (retval != 1) { return retval;
static int count = 100; }
if (count) {
count--; static void __report_bad_irq(int irq, irq_desc_t *desc, irqreturn_t action_ret)
if (retval) { {
printk("irq event %d: bogus retval mask %x\n", struct irqaction *action;
irq, retval);
} else { if (action_ret != IRQ_HANDLED && action_ret != IRQ_NONE) {
printk("irq %d: nobody cared!\n", irq); printk(KERN_ERR "irq event %d: bogus return value %x\n",
} irq, action_ret);
dump_stack(); } else {
printk("handlers:\n"); printk(KERN_ERR "irq %d: nobody cared!\n", irq);
action = first_action; }
do { dump_stack();
printk("[<%p>]", action->handler); printk(KERN_ERR "handlers:\n");
print_symbol(" (%s)", action = desc->action;
(unsigned long)action->handler); do {
printk("\n"); printk(KERN_ERR "[<%p>]", action->handler);
action = action->next; print_symbol(" (%s)",
} while (action); (unsigned long)action->handler);
} printk("\n");
action = action->next;
} while (action);
}
static void report_bad_irq(int irq, irq_desc_t *desc, irqreturn_t action_ret)
{
static int count = 100;
if (count) {
count--;
__report_bad_irq(irq, desc, action_ret);
}
}
static int noirqdebug;
static int __init noirqdebug_setup(char *str)
{
noirqdebug = 1;
printk("IRQ lockup detection disabled\n");
return 1;
}
__setup("noirqdebug", noirqdebug_setup);
/*
* If 99,900 of the previous 100,000 interrupts have not been handled then
* assume that the IRQ is stuck in some manner. Drop a diagnostic and try to
* turn the IRQ off.
*
* (The other 100-of-100,000 interrupts may have been a correctly-functioning
* device sharing an IRQ with the failing one)
*
* Called under desc->lock
*/
static void note_interrupt(int irq, irq_desc_t *desc, irqreturn_t action_ret)
{
if (action_ret != IRQ_HANDLED) {
desc->irqs_unhandled++;
if (action_ret != IRQ_NONE)
report_bad_irq(irq, desc, action_ret);
} }
return status; desc->irq_count++;
if (desc->irq_count < 100000)
return;
desc->irq_count = 0;
if (desc->irqs_unhandled > 99900) {
/*
* The interrupt is stuck
*/
__report_bad_irq(irq, desc, action_ret);
/*
* Now kill the IRQ
*/
printk(KERN_EMERG "Disabling IRQ #%d\n", irq);
desc->status |= IRQ_DISABLED;
desc->handler->disable(irq);
}
desc->irqs_unhandled = 0;
} }
/* /*
@@ -380,21 +437,24 @@ unsigned int do_IRQ(unsigned long irq, struct pt_regs *regs)
 * 0 return value means that this irq is already being
 * handled by some other CPU. (or is disabled)
 */
int cpu;
irq_desc_t *desc = irq_desc(irq);
struct irqaction * action;
irqreturn_t action_ret;
unsigned int status;
int cpu;
irq_enter(); irq_enter();
cpu = smp_processor_id(); cpu = smp_processor_id(); /* for CONFIG_PREEMPT, this must come after irq_enter()! */
kstat_cpu(cpu).irqs[irq]++;
if (desc->status & IRQ_PER_CPU) {
/* no locking required for CPU-local interrupts: */
desc->handler->ack(irq);
handle_IRQ_event(irq, regs, desc->action); action_ret = handle_IRQ_event(irq, regs, desc->action);
desc->handler->end(irq); desc->handler->end(irq);
if (!noirqdebug)
note_interrupt(irq, desc, action_ret);
} else { } else {
spin_lock(&desc->lock); spin_lock(&desc->lock);
desc->handler->ack(irq); desc->handler->ack(irq);
@@ -438,9 +498,10 @@ unsigned int do_IRQ(unsigned long irq, struct pt_regs *regs)
 */
for (;;) {
spin_unlock(&desc->lock);
handle_IRQ_event(irq, regs, action); action_ret = handle_IRQ_event(irq, regs, action);
spin_lock(&desc->lock);
if (!noirqdebug)
note_interrupt(irq, desc, action_ret);
if (!(desc->status & IRQ_PENDING))
break;
desc->status &= ~IRQ_PENDING;
@@ -322,7 +322,7 @@ fetch_min_state (pal_min_state_area_t *ms, struct pt_regs *pt, struct switch_sta
}
void
init_handler_platform (sal_log_processor_info_t *proc_ptr, init_handler_platform (pal_min_state_area_t *ms,
struct pt_regs *pt, struct switch_stack *sw)
{
struct unw_frame_info info;
@@ -337,15 +337,18 @@ init_handler_platform (sal_log_processor_info_t *proc_ptr,
 */
printk("Delaying for 5 seconds...\n");
udelay(5*1000000);
show_min_state(&SAL_LPI_PSI_INFO(proc_ptr)->min_state_area); show_min_state(ms);
printk("Backtrace of current task (pid %d, %s)\n", current->pid, current->comm); printk("Backtrace of current task (pid %d, %s)\n", current->pid, current->comm);
fetch_min_state(&SAL_LPI_PSI_INFO(proc_ptr)->min_state_area, pt, sw); fetch_min_state(ms, pt, sw);
unw_init_from_interruption(&info, current, pt, sw); unw_init_from_interruption(&info, current, pt, sw);
ia64_do_show_stack(&info, NULL); ia64_do_show_stack(&info, NULL);
#ifdef CONFIG_SMP
/* read_trylock() would be handy... */
if (!tasklist_lock.write_lock) if (!tasklist_lock.write_lock)
read_lock(&tasklist_lock); read_lock(&tasklist_lock);
#endif
{ {
struct task_struct *g, *t; struct task_struct *g, *t;
do_each_thread (g, t) { do_each_thread (g, t) {
@@ -353,11 +356,13 @@ init_handler_platform (sal_log_processor_info_t *proc_ptr,
continue;
printk("\nBacktrace of pid %d (%s)\n", t->pid, t->comm);
show_stack(t); show_stack(t, NULL);
} while_each_thread (g, t); } while_each_thread (g, t);
} }
#ifdef CONFIG_SMP
if (!tasklist_lock.write_lock) if (!tasklist_lock.write_lock)
read_unlock(&tasklist_lock); read_unlock(&tasklist_lock);
#endif
printk("\nINIT dump complete. Please reboot now.\n");
while (1); /* hang city if no debugger */
@@ -657,17 +662,17 @@ ia64_mca_init(void)
IA64_MCA_DEBUG("ia64_mca_init: registered mca rendezvous spinloop and wakeup mech.\n");
ia64_mc_info.imi_mca_handler = __pa(mca_hldlr_ptr->fp); ia64_mc_info.imi_mca_handler = ia64_tpa(mca_hldlr_ptr->fp);
/* /*
* XXX - disable SAL checksum by setting size to 0; should be * XXX - disable SAL checksum by setting size to 0; should be
* __pa(ia64_os_mca_dispatch_end) - __pa(ia64_os_mca_dispatch); * ia64_tpa(ia64_os_mca_dispatch_end) - ia64_tpa(ia64_os_mca_dispatch);
*/ */
ia64_mc_info.imi_mca_handler_size = 0; ia64_mc_info.imi_mca_handler_size = 0;
/* Register the os mca handler with SAL */ /* Register the os mca handler with SAL */
if ((rc = ia64_sal_set_vectors(SAL_VECTOR_OS_MCA, if ((rc = ia64_sal_set_vectors(SAL_VECTOR_OS_MCA,
ia64_mc_info.imi_mca_handler, ia64_mc_info.imi_mca_handler,
mca_hldlr_ptr->gp, ia64_tpa(mca_hldlr_ptr->gp),
ia64_mc_info.imi_mca_handler_size, ia64_mc_info.imi_mca_handler_size,
0, 0, 0))) 0, 0, 0)))
{ {
@@ -677,15 +682,15 @@ ia64_mca_init(void)
}
IA64_MCA_DEBUG("ia64_mca_init: registered os mca handler with SAL at 0x%lx, gp = 0x%lx\n",
ia64_mc_info.imi_mca_handler, mca_hldlr_ptr->gp); ia64_mc_info.imi_mca_handler, ia64_tpa(mca_hldlr_ptr->gp));
/* /*
* XXX - disable SAL checksum by setting size to 0, should be * XXX - disable SAL checksum by setting size to 0, should be
* IA64_INIT_HANDLER_SIZE * IA64_INIT_HANDLER_SIZE
*/ */
ia64_mc_info.imi_monarch_init_handler = __pa(mon_init_ptr->fp); ia64_mc_info.imi_monarch_init_handler = ia64_tpa(mon_init_ptr->fp);
ia64_mc_info.imi_monarch_init_handler_size = 0; ia64_mc_info.imi_monarch_init_handler_size = 0;
ia64_mc_info.imi_slave_init_handler = __pa(slave_init_ptr->fp); ia64_mc_info.imi_slave_init_handler = ia64_tpa(slave_init_ptr->fp);
ia64_mc_info.imi_slave_init_handler_size = 0; ia64_mc_info.imi_slave_init_handler_size = 0;
IA64_MCA_DEBUG("ia64_mca_init: os init handler at %lx\n", IA64_MCA_DEBUG("ia64_mca_init: os init handler at %lx\n",
@@ -694,10 +699,10 @@ ia64_mca_init(void)
/* Register the os init handler with SAL */
if ((rc = ia64_sal_set_vectors(SAL_VECTOR_OS_INIT,
ia64_mc_info.imi_monarch_init_handler, ia64_mc_info.imi_monarch_init_handler,
__pa(ia64_get_gp()), ia64_tpa(ia64_get_gp()),
ia64_mc_info.imi_monarch_init_handler_size, ia64_mc_info.imi_monarch_init_handler_size,
ia64_mc_info.imi_slave_init_handler, ia64_mc_info.imi_slave_init_handler,
__pa(ia64_get_gp()), ia64_tpa(ia64_get_gp()),
ia64_mc_info.imi_slave_init_handler_size))) ia64_mc_info.imi_slave_init_handler_size)))
{ {
printk(KERN_ERR "ia64_mca_init: Failed to register m/s init handlers with SAL. " printk(KERN_ERR "ia64_mca_init: Failed to register m/s init handlers with SAL. "
@@ -1235,32 +1240,19 @@ device_initcall(ia64_mca_late_init);
void
ia64_init_handler (struct pt_regs *pt, struct switch_stack *sw)
{
sal_log_processor_info_t *proc_ptr; pal_min_state_area_t *ms;
ia64_err_rec_t *plog_ptr;
printk(KERN_INFO "Entered OS INIT handler\n"); printk(KERN_INFO "Entered OS INIT handler. PSP=%lx\n",
ia64_sal_to_os_handoff_state.proc_state_param);
/* Get the INIT processor log */
if (!ia64_log_get(SAL_INFO_TYPE_INIT, (prfunc_t)printk))
return; // no record retrieved
#ifdef IA64_DUMP_ALL_PROC_INFO
ia64_log_print(SAL_INFO_TYPE_INIT, (prfunc_t)printk);
#endif
/* /*
* get pointer to min state save area * Address of minstate area provided by PAL is physical,
* * uncacheable (bit 63 set). Convert to Linux virtual
* address in region 6.
*/ */
plog_ptr=(ia64_err_rec_t *)IA64_LOG_CURR_BUFFER(SAL_INFO_TYPE_INIT); ms = (pal_min_state_area_t *)(ia64_sal_to_os_handoff_state.pal_min_state | (6ul<<61));
proc_ptr = &plog_ptr->proc_err;
ia64_process_min_state_save(&SAL_LPI_PSI_INFO(proc_ptr)->min_state_area);
/* Clear the INIT SAL logs now that they have been saved in the OS buffer */
ia64_sal_clear_state_info(SAL_INFO_TYPE_INIT);
init_handler_platform(proc_ptr, pt, sw); /* call platform specific routines */ init_handler_platform(ms, pt, sw); /* call platform specific routines */
} }
/* /*
@@ -50,14 +50,15 @@
 * 6. GR12 = Return address to location within SAL_CHECK
 */
#define SAL_TO_OS_MCA_HANDOFF_STATE_SAVE(_tmp) \ #define SAL_TO_OS_MCA_HANDOFF_STATE_SAVE(_tmp) \
movl _tmp=ia64_sal_to_os_handoff_state;; \ LOAD_PHYSICAL(p0, _tmp, ia64_sal_to_os_handoff_state);; \
DATA_VA_TO_PA(_tmp);; \
st8 [_tmp]=r1,0x08;; \ st8 [_tmp]=r1,0x08;; \
st8 [_tmp]=r8,0x08;; \ st8 [_tmp]=r8,0x08;; \
st8 [_tmp]=r9,0x08;; \ st8 [_tmp]=r9,0x08;; \
st8 [_tmp]=r10,0x08;; \ st8 [_tmp]=r10,0x08;; \
st8 [_tmp]=r11,0x08;; \ st8 [_tmp]=r11,0x08;; \
st8 [_tmp]=r12,0x08 st8 [_tmp]=r12,0x08;; \
st8 [_tmp]=r17,0x08;; \
st8 [_tmp]=r18,0x08
/*
 * OS_MCA_TO_SAL_HANDOFF_STATE (SAL 3.0 spec)
@@ -70,9 +71,8 @@
 * returns ptr to SAL rtn save loc in _tmp
 */
#define OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(_tmp) \ #define OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(_tmp) \
(p6) movl _tmp=ia64_sal_to_os_handoff_state;; \ LOAD_PHYSICAL(p6, _tmp, ia64_sal_to_os_handoff_state);; \
(p7) movl _tmp=ia64_os_to_sal_handoff_state;; \ LOAD_PHYSICAL(p7, _tmp, ia64_os_to_sal_handoff_state);; \
DATA_VA_TO_PA(_tmp);; \
(p6) movl r8=IA64_MCA_COLD_BOOT; \ (p6) movl r8=IA64_MCA_COLD_BOOT; \
(p6) movl r10=IA64_MCA_SAME_CONTEXT; \ (p6) movl r10=IA64_MCA_SAME_CONTEXT; \
(p6) add _tmp=0x18,_tmp;; \ (p6) add _tmp=0x18,_tmp;; \
@@ -18,7 +18,8 @@
LTOFF22X
LTOFF22X
LTOFF_FPTR22
PCREL21B PCREL21B (for br.call only; br.cond is not supported out of modules!)
PCREL60B (for brl.cond only; brl.call is not supported for modules!)
PCREL64LSB
SECREL32LSB
SEGREL64LSB
@@ -33,6 +34,7 @@
#include <linux/string.h>
#include <linux/vmalloc.h>
#include <asm/patch.h>
#include <asm/unaligned.h>
#define ARCH_MODULE_DEBUG 0
@@ -158,27 +160,6 @@ slot (const struct insn *insn)
return (uint64_t) insn & 0x3;
}
-/* Patch instruction with "val" where "mask" has 1 bits. */
-static void
-apply (struct insn *insn, uint64_t mask, uint64_t val)
-{
-	uint64_t m0, m1, v0, v1, b0, b1, *b = (uint64_t *) bundle(insn);
-#	define insn_mask ((1UL << 41) - 1)
-	unsigned long shift;
-	b0 = b[0]; b1 = b[1];
-	shift = 5 + 41 * slot(insn);	/* 5 bits of template, then 3 x 41-bit instructions */
-	if (shift >= 64) {
-		m1 = mask << (shift - 64);
-		v1 = val << (shift - 64);
-	} else {
-		m0 = mask << shift; m1 = mask >> (64 - shift);
-		v0 = val << shift; v1 = val >> (64 - shift);
-		b[0] = (b0 & ~m0) | (v0 & m0);
-	}
-	b[1] = (b1 & ~m1) | (v1 & m1);
-}
 static int
 apply_imm64 (struct module *mod, struct insn *insn, uint64_t val)
 {
@@ -187,12 +168,7 @@ apply_imm64 (struct module *mod, struct insn *insn, uint64_t val)
 		       mod->name, slot(insn));
 		return 0;
 	}
-	apply(insn, 0x01fffefe000, ( ((val & 0x8000000000000000) >> 27) /* bit 63 -> 36 */
-				   | ((val & 0x0000000000200000) <<  0) /* bit 21 -> 21 */
-				   | ((val & 0x00000000001f0000) <<  6) /* bit 16 -> 22 */
-				   | ((val & 0x000000000000ff80) << 20) /* bit  7 -> 27 */
-				   | ((val & 0x000000000000007f) << 13) /* bit  0 -> 13 */));
-	apply((void *) insn - 1, 0x1ffffffffff, val >> 22);
+	ia64_patch_imm64((u64) insn, val);
 	return 1;
 }
@@ -208,9 +184,7 @@ apply_imm60 (struct module *mod, struct insn *insn, uint64_t val)
 		printk(KERN_ERR "%s: value %ld out of IMM60 range\n", mod->name, (int64_t) val);
 		return 0;
 	}
-	apply(insn, 0x011ffffe000, ( ((val & 0x1000000000000000) >> 24) /* bit 60 -> 36 */
-				   | ((val & 0x00000000000fffff) << 13) /* bit  0 -> 13 */));
-	apply((void *) insn - 1, 0x1fffffffffc, val >> 18);
+	ia64_patch_imm60((u64) insn, val);
 	return 1;
 }
@@ -221,10 +195,10 @@ apply_imm22 (struct module *mod, struct insn *insn, uint64_t val)
 		printk(KERN_ERR "%s: value %li out of IMM22 range\n", mod->name, (int64_t)val);
 		return 0;
 	}
-	apply(insn, 0x01fffcfe000, ( ((val & 0x200000) << 15) /* bit 21 -> 36 */
-				   | ((val & 0x1f0000) <<  6) /* bit 16 -> 22 */
-				   | ((val & 0x00ff80) << 20) /* bit  7 -> 27 */
-				   | ((val & 0x00007f) << 13) /* bit  0 -> 13 */));
+	ia64_patch((u64) insn, 0x01fffcfe000, ( ((val & 0x200000) << 15) /* bit 21 -> 36 */
+					      | ((val & 0x1f0000) <<  6) /* bit 16 -> 22 */
+					      | ((val & 0x00ff80) << 20) /* bit  7 -> 27 */
+					      | ((val & 0x00007f) << 13) /* bit  0 -> 13 */));
 	return 1;
 }
@@ -235,8 +209,8 @@ apply_imm21b (struct module *mod, struct insn *insn, uint64_t val)
 		printk(KERN_ERR "%s: value %li out of IMM21b range\n", mod->name, (int64_t)val);
 		return 0;
 	}
-	apply(insn, 0x11ffffe000, ( ((val & 0x100000) << 16) /* bit 20 -> 36 */
-				  | ((val & 0x0fffff) << 13) /* bit  0 -> 13 */));
+	ia64_patch((u64) insn, 0x11ffffe000, ( ((val & 0x100000) << 16) /* bit 20 -> 36 */
+					     | ((val & 0x0fffff) << 13) /* bit  0 -> 13 */));
 	return 1;
 }
@@ -281,7 +255,7 @@ plt_target (struct plt_entry *plt)
 	b0 = b[0]; b1 = b[1];
 	off = ( ((b1 & 0x00fffff000000000) >> 36)		/* imm20b -> bit 0 */
 		| ((b0 >> 48) << 20) | ((b1 & 0x7fffff) << 36)	/* imm39 -> bit 20 */
-		| ((b1 & 0x0800000000000000) <<  1));		/* i -> bit 60 */
+		| ((b1 & 0x0800000000000000) <<  0));		/* i -> bit 59 */
 	return (long) plt->bundle[1] + 16*off;
 }
@@ -751,7 +725,7 @@ do_reloc (struct module *mod, uint8_t r_type, Elf64_Sym *sym, uint64_t addend,
 		if (gp_addressable(mod, val)) {
 			/* turn "ld8" into "mov": */
 			DEBUGP("%s: patching ld8 at %p to mov\n", __FUNCTION__, location);
-			apply(location, 0x1fff80fe000, 0x10000000000);
+			ia64_patch((u64) location, 0x1fff80fe000, 0x10000000000);
 		}
 		return 0;
@@ -889,7 +863,8 @@ module_arch_cleanup (struct module *mod)
 }
 #ifdef CONFIG_SMP
-void percpu_modcopy(void *pcpudst, const void *src, unsigned long size)
+void
+percpu_modcopy (void *pcpudst, const void *src, unsigned long size)
 {
 	unsigned int i;
 	for (i = 0; i < NR_CPUS; i++)
...
@@ -164,7 +164,7 @@ GLOBAL_ENTRY(ia64_pal_call_phys_static)
 	;;
 	mov loc4=ar.rsc			// save RSE configuration
 	dep.z loc2=loc2,0,61		// convert pal entry point to physical
-	dep.z r8=r8,0,61		// convert rp to physical
+	tpa r8=r8			// convert rp to physical
 	;;
 	mov b7 = loc2			// install target to branch reg
 	mov ar.rsc=0			// put RSE in enforced lazy, LE mode
@@ -174,13 +174,13 @@ GLOBAL_ENTRY(ia64_pal_call_phys_static)
 	or loc3=loc3,r17		// add in psr the bits to set
 	;;
 	andcm r16=loc3,r16		// removes bits to clear from psr
-	br.call.sptk.many rp=ia64_switch_mode
+	br.call.sptk.many rp=ia64_switch_mode_phys
 .ret1:	mov rp = r8			// install return address (physical)
 	br.cond.sptk.many b7
 1:
 	mov ar.rsc=0			// put RSE in enforced lazy, LE mode
 	mov r16=loc3			// r16= original psr
-	br.call.sptk.many rp=ia64_switch_mode	// return to virtual mode
+	br.call.sptk.many rp=ia64_switch_mode_virt	// return to virtual mode
 .ret2:
 	mov psr.l = loc3		// restore init PSR
@@ -228,13 +228,13 @@ GLOBAL_ENTRY(ia64_pal_call_phys_stacked)
 	mov b7 = loc2			// install target to branch reg
 	;;
 	andcm r16=loc3,r16		// removes bits to clear from psr
-	br.call.sptk.many rp=ia64_switch_mode
+	br.call.sptk.many rp=ia64_switch_mode_phys
 .ret6:
 	br.call.sptk.many rp=b7		// now make the call
 .ret7:
 	mov ar.rsc=0			// put RSE in enforced lazy, LE mode
 	mov r16=loc3			// r16= original psr
-	br.call.sptk.many rp=ia64_switch_mode	// return to virtual mode
+	br.call.sptk.many rp=ia64_switch_mode_virt	// return to virtual mode
 .ret8:	mov psr.l = loc3		// restore init PSR
 	mov ar.pfs = loc1
...
@@ -352,10 +352,10 @@ static struct task_struct * __init
 fork_by_hand (void)
 {
 	/*
-	 * don't care about the eip and regs settings since we'll never reschedule the
+	 * Don't care about the IP and regs settings since we'll never reschedule the
 	 * forked task.
 	 */
-	return do_fork(CLONE_VM|CLONE_IDLETASK, 0, 0, 0, NULL, NULL);
+	return copy_process(CLONE_VM|CLONE_IDLETASK, 0, 0, 0, NULL, NULL);
 }
 static int __init
@@ -370,6 +370,7 @@ do_boot_cpu (int sapicid, int cpu)
 	idle = fork_by_hand();
 	if (IS_ERR(idle))
 		panic("failed fork for CPU %d", cpu);
+	wake_up_forked_process(idle);
 	/*
 	 * We remove it from the pidhash and the runqueue
@@ -449,7 +450,7 @@ smp_build_cpu_map (void)
 	for (cpu = 1, i = 0; i < smp_boot_data.cpu_count; i++) {
 		sapicid = smp_boot_data.cpu_phys_id[i];
-		if (sapicid == -1 || sapicid == boot_cpu_id)
+		if (sapicid == boot_cpu_id)
 			continue;
 		phys_cpu_present_map |= (1 << cpu);
 		ia64_cpu_to_sapicid[cpu] = sapicid;
@@ -598,7 +599,7 @@ init_smp_config(void)
 	/* Tell SAL where to drop the AP's. */
 	ap_startup = (struct fptr *) start_ap;
 	sal_ret = ia64_sal_set_vectors(SAL_VECTOR_OS_BOOT_RENDEZ,
-				       __pa(ap_startup->fp), __pa(ap_startup->gp), 0, 0, 0, 0);
+				       ia64_tpa(ap_startup->fp), ia64_tpa(ap_startup->gp), 0, 0, 0, 0);
 	if (sal_ret < 0)
 		printk(KERN_ERR "SMP: Can't set SAL AP Boot Rendezvous: %s\n",
 		       ia64_sal_strerror(sal_ret));
...