Commit 0068306f authored by Linus Torvalds

Merge http://mdomsch.bkbits.net/linux-2.5-edd

into home.osdl.org:/home/torvalds/v2.5/linux
parents 0c13346e bd2fdafa
...@@ -4,7 +4,7 @@ ...@@ -4,7 +4,7 @@
----------------------------------- -----------------------------------
Started: 13-Jan-2003 Started: 13-Jan-2003
Last update: 11-Feb-2003 Last update: 27-Sep-2003
David Mosberger-Tang David Mosberger-Tang
<davidm@hpl.hp.com> <davidm@hpl.hp.com>
...@@ -146,6 +146,12 @@ speed comes a set of restrictions: ...@@ -146,6 +146,12 @@ speed comes a set of restrictions:
task pointer is not considered sensitive: it's already exposed task pointer is not considered sensitive: it's already exposed
through ar.k6). through ar.k6).
o Fsyscall-handlers MUST NOT access user memory without first
validating access permission (this can typically be done via
probe.r.fault and/or probe.w.fault) and without guarding against
memory-access exceptions (this can be done with the EX() macros
defined by asmmacro.h).
The above restrictions may seem draconian, but remember that it's The above restrictions may seem draconian, but remember that it's
possible to trade off some of the restrictions by paying a slightly possible to trade off some of the restrictions by paying a slightly
higher overhead. For example, if an fsyscall-handler could benefit higher overhead. For example, if an fsyscall-handler could benefit
...@@ -229,3 +235,52 @@ PSR.ed Unchanged. Note: This bit could only have an effect if an fsys-mode ...@@ -229,3 +235,52 @@ PSR.ed Unchanged. Note: This bit could only have an effect if an fsys-mode
PSR.bn Unchanged. Note: fsys-mode handlers may clear the bit, if needed. PSR.bn Unchanged. Note: fsys-mode handlers may clear the bit, if needed.
Doing so requires clearing PSR.i and PSR.ic as well. Doing so requires clearing PSR.i and PSR.ic as well.
PSR.ia Unchanged. Note: the ia64 linux kernel never sets this bit. PSR.ia Unchanged. Note: the ia64 linux kernel never sets this bit.
* Using fast system calls
To use fast system calls, userspace applications need only call
__kernel_syscall_via_epc(). For example:
-- example fgettimeofday() call --
-- fgettimeofday.S --
#include <asm/asmmacro.h>
GLOBAL_ENTRY(fgettimeofday)
.prologue
.save ar.pfs, r11
mov r11 = ar.pfs
.body
mov r2 = 0xa000000000020660;; // gate address
// found by inspection of System.map for the
// __kernel_syscall_via_epc() function. See
// below for how to do this for real.
mov b7 = r2
mov r15 = 1087 // gettimeofday syscall
;;
br.call.sptk.many b6 = b7
;;
.restore sp
mov ar.pfs = r11
br.ret.sptk.many rp;; // return to caller
END(fgettimeofday)
-- end fgettimeofday.S --
In reality, getting the gate address is accomplished by two extra
values passed via the ELF auxiliary vector (include/asm-ia64/elf.h)
o AT_SYSINFO : is the address of __kernel_syscall_via_epc()
o AT_SYSINFO_EHDR : is the address of the kernel gate ELF DSO
The ELF DSO is a pre-linked library that is mapped in by the kernel at
the gate page. It is a proper ELF shared object so, with a dynamic
loader that recognises the library, you should be able to make calls to
the exported functions within it as with any other shared library.
AT_SYSINFO points into the kernel DSO at the
__kernel_syscall_via_epc() function for historical reasons (it was
used before the kernel DSO) and as a convenience.
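As an aside, a userspace program can retrieve AT_SYSINFO from its own
auxiliary vector instead of hard-coding the gate address. A minimal
sketch (hypothetical test program; Elf64_auxv_t and AT_SYSINFO as
declared by glibc's <elf.h>, the auxv read back via /proc/self/auxv):
-- find_sysinfo.c --
#include <elf.h>
#include <stdio.h>
/* Scan /proc/self/auxv for AT_SYSINFO, the address of
   __kernel_syscall_via_epc() inside the kernel gate DSO. */
int main(void)
{
	Elf64_auxv_t aux;
	FILE *f = fopen("/proc/self/auxv", "r");
	if (!f)
		return 1;
	while (fread(&aux, sizeof(aux), 1, f) == 1)
		if (aux.a_type == AT_SYSINFO)
			printf("AT_SYSINFO = 0x%llx\n",
			       (unsigned long long) aux.a_un.a_val);
	fclose(f);
	return 0;
}
-- end find_sysinfo.c --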
...@@ -149,7 +149,7 @@ maketools: include/asm-arm/.arch \ ...@@ -149,7 +149,7 @@ maketools: include/asm-arm/.arch \
bzImage: vmlinux bzImage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/zImage $(Q)$(MAKE) $(build)=$(boot) $(boot)/zImage
zImage Image bootpImage: vmlinux zImage Image bootpImage uImage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@ $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
zinstall install: vmlinux zinstall install: vmlinux
......
...@@ -8,6 +8,8 @@ ...@@ -8,6 +8,8 @@
# Copyright (C) 1995-2002 Russell King # Copyright (C) 1995-2002 Russell King
# #
MKIMAGE := $(srctree)/scripts/mkuboot.sh
# Note: the following conditions must always be true: # Note: the following conditions must always be true:
# ZRELADDR == virt_to_phys(TEXTADDR) # ZRELADDR == virt_to_phys(TEXTADDR)
# PARAMS_PHYS must be within 4MB of ZRELADDR # PARAMS_PHYS must be within 4MB of ZRELADDR
...@@ -42,12 +44,14 @@ initrd_phys-$(CONFIG_ARCH_CDB89712) := 0x00700000 ...@@ -42,12 +44,14 @@ initrd_phys-$(CONFIG_ARCH_CDB89712) := 0x00700000
ifeq ($(CONFIG_ARCH_SA1100),y) ifeq ($(CONFIG_ARCH_SA1100),y)
zreladdr-$(CONFIG_SA1111) := 0xc0208000 zreladdr-$(CONFIG_SA1111) := 0xc0208000
endif endif
params_phys-$(CONFIG_ARCH_SA1100) := 0xc0000100
initrd_phys-$(CONFIG_ARCH_SA1100) := 0xc0800000
zreladdr-$(CONFIG_ARCH_PXA) := 0xa0008000 zreladdr-$(CONFIG_ARCH_PXA) := 0xa0008000
zreladdr-$(CONFIG_ARCH_ANAKIN) := 0x20008000 zreladdr-$(CONFIG_ARCH_ANAKIN) := 0x20008000
zreladdr-$(CONFIG_ARCH_IOP3XX) := 0xa0008000 zreladdr-$(CONFIG_ARCH_IOP3XX) := 0xa0008000
params-phys-$(CONFIG_ARCH_IOP3XX) := 0xa0000100 params_phys-$(CONFIG_ARCH_IOP3XX) := 0xa0000100
zreladdr-$(CONFIG_ARCH_ADIFCC) := 0xc0008000 zreladdr-$(CONFIG_ARCH_ADIFCC) := 0xc0008000
params-phys-$(CONFIG_ARCH_ADIFCC) := 0xc0000100 params_phys-$(CONFIG_ARCH_ADIFCC) := 0xc0000100
ZRELADDR := $(zreladdr-y) ZRELADDR := $(zreladdr-y)
ZTEXTADDR := $(ztextaddr-y) ZTEXTADDR := $(ztextaddr-y)
...@@ -78,6 +82,16 @@ $(obj)/zImage: $(obj)/compressed/vmlinux FORCE ...@@ -78,6 +82,16 @@ $(obj)/zImage: $(obj)/compressed/vmlinux FORCE
$(call if_changed,objcopy) $(call if_changed,objcopy)
@echo ' Kernel: $@ is ready' @echo ' Kernel: $@ is ready'
quiet_cmd_uimage = UIMAGE $@
cmd_uimage = $(CONFIG_SHELL) $(MKIMAGE) -A arm -O linux -T kernel \
-C none -a $(ZRELADDR) -e $(ZRELADDR) \
-n 'Linux-$(KERNELRELEASE)' -d $< $@
targets += uImage
$(obj)/uImage: $(obj)/zImage
$(call if_changed,uimage)
@echo ' Image $@ is ready'
$(obj)/bootpImage: $(obj)/bootp/bootp FORCE $(obj)/bootpImage: $(obj)/bootp/bootp FORCE
$(call if_changed,objcopy) $(call if_changed,objcopy)
@echo ' Kernel: $@ is ready' @echo ' Kernel: $@ is ready'
...@@ -86,7 +100,7 @@ $(obj)/compressed/vmlinux: vmlinux FORCE ...@@ -86,7 +100,7 @@ $(obj)/compressed/vmlinux: vmlinux FORCE
$(Q)$(MAKE) $(build)=$(obj)/compressed $@ $(Q)$(MAKE) $(build)=$(obj)/compressed $@
$(obj)/bootp/bootp: $(obj)/zImage initrd FORCE $(obj)/bootp/bootp: $(obj)/zImage initrd FORCE
$(Q)$(MAKE) $(build)=$(obj)/compressed $@ $(Q)$(MAKE) $(build)=$(obj)/bootp $@
.PHONY: initrd .PHONY: initrd
initrd: initrd:
......
...@@ -16,10 +16,10 @@ SECTIONS ...@@ -16,10 +16,10 @@ SECTIONS
.text : { .text : {
_stext = .; _stext = .;
*(.start) *(.start)
kernel.o arch/arm/boot/bootp/kernel.o
. = ALIGN(32); . = ALIGN(32);
initrd_start = .; initrd_start = .;
initrd.o arch/arm/boot/bootp/initrd.o
initrd_len = . - initrd_start; initrd_len = . - initrd_start;
. = ALIGN(32); . = ALIGN(32);
_etext = .; _etext = .;
......
...@@ -131,6 +131,7 @@ EXPORT_SYMBOL(fp_init); ...@@ -131,6 +131,7 @@ EXPORT_SYMBOL(fp_init);
EXPORT_SYMBOL(__machine_arch_type); EXPORT_SYMBOL(__machine_arch_type);
/* networking */ /* networking */
EXPORT_SYMBOL(csum_partial);
EXPORT_SYMBOL(csum_partial_copy_nocheck); EXPORT_SYMBOL(csum_partial_copy_nocheck);
EXPORT_SYMBOL(__csum_ipv6_magic); EXPORT_SYMBOL(__csum_ipv6_magic);
......
...@@ -30,6 +30,8 @@ ENTRY(do_div64) ...@@ -30,6 +30,8 @@ ENTRY(do_div64)
moveq lr, #1 @ only divide low bits moveq lr, #1 @ only divide low bits
moveq nh, onl moveq nh, onl
tst dh, #0x80000000
bne 2f
1: cmp nh, dh 1: cmp nh, dh
bls 2f bls 2f
add lr, lr, #1 add lr, lr, #1
......
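The two added instructions guard the normalisation loop at label 1: if
the divisor's high word already has bit 31 set, a further left shift
would drop that bit, so the code branches straight to the divide step.
A rough C model of the guarded loop (illustrative only; a single
64-bit value stands in for the nh/dh register pair):
/* Double the divisor until it reaches the numerator, but stop before
 * the top bit would be shifted out -- the overflow that the added
 * "tst dh, #0x80000000; bne 2f" test avoids. */
static unsigned int normalize(unsigned long long *d, unsigned long long n)
{
	unsigned int shift = 0;
	while (!(*d >> 63) && *d < n) {
		*d <<= 1;
		shift++;
	}
	return shift;
}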
...@@ -341,8 +341,8 @@ config UNIX98_PTY_COUNT ...@@ -341,8 +341,8 @@ config UNIX98_PTY_COUNT
endmenu endmenu
#source drivers/misc/Config.in
source "drivers/media/Kconfig" source "drivers/media/Kconfig"
source "sound/Kconfig"
source "fs/Kconfig" source "fs/Kconfig"
......
uClinux-2.4 for H8/300 README linux-2.6 for H8/300 README
Yoshinori Sato <ysato@users.sourceforge.jp> Yoshinori Sato <ysato@users.sourceforge.jp>
* Supported CPU * Supported CPU
H8/300H H8/300H and H8S
H8S is planning.
* Supported Target * Supported Target
1.simulator of GDB 1.simulator of GDB
...@@ -15,8 +14,11 @@ H8S is planning. ...@@ -15,8 +14,11 @@ H8S is planning.
Akizuki Denshi Tsusho Ltd. <http://www.akizuki.ne.jp> (Japanese Only) Akizuki Denshi Tsusho Ltd. <http://www.akizuki.ne.jp> (Japanese Only)
3.H8MAX 3.H8MAX
Under development see http://ip-sol.jp/h8max/ (Japanese Only)
see http://www.strawberry-linux.com (Japanese Only)
4.EDOSK2674
see http://www.eu.renesas.com/products/mpumcu/tool/edk/support/edosk2674.html
http://www.azpower.com/H8-uClinux/
* Toolchain Version * Toolchain Version
gcc-3.1 or higher and patch gcc-3.1 or higher and patch
...@@ -26,10 +28,10 @@ gdb-5.2 or higher ...@@ -26,10 +28,10 @@ gdb-5.2 or higher
An environment that can compile h8300-elf binaries is required. An environment that can compile h8300-elf binaries is required.
* Userland development environment * Userland development environment
Temporarily used the h8300-hms (h8300-coff) toolchain. Uses the h8300-elf toolchain.
I prepared a toolchain for h8300-elf. See http://www.uclinux.org/pub/uClinux/ports/h8/
* A few words of thanks * A few words of thanks
Porting to the H8/300H was supported by the Information-technology Promotion Agency, Japan. Porting to the H8/300 series was supported by the Information-technology Promotion Agency, Japan.
I thank them for their support. I thank them for their support.
And thanks to all developers and users. And thanks to all developers and users.
/* romfs move to __ebss */ /* romfs move to __ebss */
#include <asm/linkage.h> #include <asm/linkage.h>
#include <linux/config.h>
#if defined(__H8300H__) #if defined(__H8300H__)
.h8300h .h8300h
...@@ -9,6 +10,8 @@ ...@@ -9,6 +10,8 @@
.h8300s .h8300s
#endif #endif
#define BLKOFFSET 512
.text .text
.globl __move_romfs .globl __move_romfs
_romfs_sig_len = 8 _romfs_sig_len = 8
...@@ -31,6 +34,9 @@ __move_romfs: ...@@ -31,6 +34,9 @@ __move_romfs:
add.l er0,er1 /* romfs image end */ add.l er0,er1 /* romfs image end */
mov.l #__ebss,er2 mov.l #__ebss,er2
add.l er0,er2 /* distination address */ add.l er0,er2 /* distination address */
#if defined(CONFIG_INTELFLASH)
add.l #BLKOFFSET,er2
#endif
adds #2,er0 adds #2,er0
adds #1,er0 adds #1,er0
shlr er0 shlr er0
......
/* /*
* linux/arch/h8300/platform/h8sh/ints.c * linux/arch/h8300/platform/h8s/ints.c
* *
* Yoshinori Sato <ysato@users.sourceforge.jp> * Yoshinori Sato <ysato@users.sourceforge.jp>
* *
...@@ -20,7 +20,6 @@ ...@@ -20,7 +20,6 @@
#include <linux/kernel_stat.h> #include <linux/kernel_stat.h>
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/slab.h>
#include <linux/bootmem.h> #include <linux/bootmem.h>
#include <linux/random.h> #include <linux/random.h>
...@@ -75,7 +74,7 @@ const static struct irq_pins irq_assign_table1[16]={ ...@@ -75,7 +74,7 @@ const static struct irq_pins irq_assign_table1[16]={
{H8300_GPIO_P2,H8300_GPIO_B6},{H8300_GPIO_P2,H8300_GPIO_B7}, {H8300_GPIO_P2,H8300_GPIO_B6},{H8300_GPIO_P2,H8300_GPIO_B7},
}; };
static int use_kmalloc; static short use_kmalloc = 0;
extern unsigned long *interrupt_redirect_table; extern unsigned long *interrupt_redirect_table;
......
...@@ -533,14 +533,19 @@ static int lapic_resume(struct sys_device *dev) ...@@ -533,14 +533,19 @@ static int lapic_resume(struct sys_device *dev)
if (!apic_pm_state.active) if (!apic_pm_state.active)
return 0; return 0;
/* XXX: Pavel needs this for S3 resume, but can't explain why */
set_fixmap_nocache(FIX_APIC_BASE, APIC_DEFAULT_PHYS_BASE);
local_irq_save(flags); local_irq_save(flags);
/*
* Make sure the APICBASE points to the right address
*
* FIXME! This will be wrong if we ever support suspend on
* SMP! We'll need to do this as part of the CPU restore!
*/
rdmsr(MSR_IA32_APICBASE, l, h); rdmsr(MSR_IA32_APICBASE, l, h);
l &= ~MSR_IA32_APICBASE_BASE; l &= ~MSR_IA32_APICBASE_BASE;
l |= MSR_IA32_APICBASE_ENABLE | APIC_DEFAULT_PHYS_BASE; l |= MSR_IA32_APICBASE_ENABLE | mp_lapic_addr;
wrmsr(MSR_IA32_APICBASE, l, h); wrmsr(MSR_IA32_APICBASE, l, h);
apic_write(APIC_LVTERR, ERROR_APIC_VECTOR | APIC_LVT_MASKED); apic_write(APIC_LVTERR, ERROR_APIC_VECTOR | APIC_LVT_MASKED);
apic_write(APIC_ID, apic_pm_state.apic_id); apic_write(APIC_ID, apic_pm_state.apic_id);
apic_write(APIC_DFR, apic_pm_state.apic_dfr); apic_write(APIC_DFR, apic_pm_state.apic_dfr);
...@@ -680,6 +685,12 @@ static int __init detect_init_APIC (void) ...@@ -680,6 +685,12 @@ static int __init detect_init_APIC (void)
} }
set_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability); set_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability);
mp_lapic_addr = APIC_DEFAULT_PHYS_BASE; mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
/* The BIOS may have set up the APIC at some other address */
rdmsr(MSR_IA32_APICBASE, l, h);
if (l & MSR_IA32_APICBASE_ENABLE)
mp_lapic_addr = l & MSR_IA32_APICBASE_BASE;
if (nmi_watchdog != NMI_NONE) if (nmi_watchdog != NMI_NONE)
nmi_watchdog = NMI_LOCAL_APIC; nmi_watchdog = NMI_LOCAL_APIC;
......
...@@ -380,7 +380,7 @@ void enable_irq(unsigned int irq) ...@@ -380,7 +380,7 @@ void enable_irq(unsigned int irq)
spin_lock_irqsave(&desc->lock, flags); spin_lock_irqsave(&desc->lock, flags);
switch (desc->depth) { switch (desc->depth) {
case 1: { case 1: {
unsigned int status = desc->status & ~IRQ_DISABLED; unsigned int status = desc->status & ~(IRQ_DISABLED | IRQ_INPROGRESS);
desc->status = status; desc->status = status;
if ((status & (IRQ_PENDING | IRQ_REPLAY)) == IRQ_PENDING) { if ((status & (IRQ_PENDING | IRQ_REPLAY)) == IRQ_PENDING) {
desc->status = status | IRQ_REPLAY; desc->status = status | IRQ_REPLAY;
......
...@@ -337,6 +337,16 @@ static void __init smp_read_mpc_oem(struct mp_config_oemtable *oemtable, \ ...@@ -337,6 +337,16 @@ static void __init smp_read_mpc_oem(struct mp_config_oemtable *oemtable, \
} }
} }
} }
static inline void mps_oem_check(struct mp_config_table *mpc, char *oem,
char *productid)
{
if (strncmp(oem, "IBM NUMA", 8))
printk("Warning! May not be a NUMA-Q system!\n");
if (mpc->mpc_oemptr)
smp_read_mpc_oem((struct mp_config_oemtable *) mpc->mpc_oemptr,
mpc->mpc_oemsize);
}
#endif /* CONFIG_X86_NUMAQ */ #endif /* CONFIG_X86_NUMAQ */
/* /*
......
...@@ -104,7 +104,7 @@ void show_trace(struct task_struct *task, unsigned long * stack) ...@@ -104,7 +104,7 @@ void show_trace(struct task_struct *task, unsigned long * stack)
#ifdef CONFIG_KALLSYMS #ifdef CONFIG_KALLSYMS
printk("\n"); printk("\n");
#endif #endif
while (((long) stack & (THREAD_SIZE-1)) != 0) { while (!kstack_end(stack)) {
addr = *stack++; addr = *stack++;
if (kernel_text_address(addr)) { if (kernel_text_address(addr)) {
printk(" [<%08lx>] ", addr); printk(" [<%08lx>] ", addr);
...@@ -138,7 +138,7 @@ void show_stack(struct task_struct *task, unsigned long *esp) ...@@ -138,7 +138,7 @@ void show_stack(struct task_struct *task, unsigned long *esp)
stack = esp; stack = esp;
for(i = 0; i < kstack_depth_to_print; i++) { for(i = 0; i < kstack_depth_to_print; i++) {
if (((long) stack & (THREAD_SIZE-1)) == 0) if (kstack_end(stack))
break; break;
if (i && ((i % 8) == 0)) if (i && ((i % 8) == 0))
printk("\n "); printk("\n ");
...@@ -374,6 +374,9 @@ DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, get_cr ...@@ -374,6 +374,9 @@ DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, get_cr
asmlinkage void do_general_protection(struct pt_regs * regs, long error_code) asmlinkage void do_general_protection(struct pt_regs * regs, long error_code)
{ {
if (regs->eflags & X86_EFLAGS_IF)
local_irq_enable();
if (regs->eflags & VM_MASK) if (regs->eflags & VM_MASK)
goto gp_in_vm86; goto gp_in_vm86;
...@@ -386,6 +389,7 @@ asmlinkage void do_general_protection(struct pt_regs * regs, long error_code) ...@@ -386,6 +389,7 @@ asmlinkage void do_general_protection(struct pt_regs * regs, long error_code)
return; return;
gp_in_vm86: gp_in_vm86:
local_irq_enable();
handle_vm86_fault((struct kernel_vm86_regs *) regs, error_code); handle_vm86_fault((struct kernel_vm86_regs *) regs, error_code);
return; return;
......
...@@ -223,7 +223,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code) ...@@ -223,7 +223,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code)
__asm__("movl %%cr2,%0":"=r" (address)); __asm__("movl %%cr2,%0":"=r" (address));
/* It's safe to allow irq's after cr2 has been saved */ /* It's safe to allow irq's after cr2 has been saved */
if (regs->eflags & X86_EFLAGS_IF) if (regs->eflags & (X86_EFLAGS_IF|VM_MASK))
local_irq_enable(); local_irq_enable();
tsk = current; tsk = current;
......
...@@ -57,6 +57,10 @@ choice ...@@ -57,6 +57,10 @@ choice
config IA64_GENERIC config IA64_GENERIC
bool "generic" bool "generic"
select NUMA
select ACPI_NUMA
select VIRTUAL_MEM_MAP
select DISCONTIGMEM
---help--- ---help---
This selects the system type of your hardware. A "generic" kernel This selects the system type of your hardware. A "generic" kernel
will run on any supported IA-64 system. However, if you configure will run on any supported IA-64 system. However, if you configure
...@@ -220,24 +224,8 @@ config NUMA ...@@ -220,24 +224,8 @@ config NUMA
Access). This option is for configuring high-end multiprocessor Access). This option is for configuring high-end multiprocessor
server systems. If in doubt, say N. server systems. If in doubt, say N.
choice
prompt "Maximum Memory per NUMA Node" if NUMA && IA64_DIG
depends on NUMA && IA64_DIG
default IA64_NODESIZE_16GB
config IA64_NODESIZE_16GB
bool "16GB"
config IA64_NODESIZE_64GB
bool "64GB"
config IA64_NODESIZE_256GB
bool "256GB"
endchoice
config DISCONTIGMEM config DISCONTIGMEM
bool "Discontiguous memory support" if (IA64_DIG || IA64_SGI_SN2 || IA64_GENERIC) && NUMA bool "Discontiguous memory support" if (IA64_DIG || IA64_SGI_SN2 || IA64_GENERIC) && NUMA && VIRTUAL_MEM_MAP
default y if (IA64_SGI_SN2 || IA64_GENERIC) && NUMA default y if (IA64_SGI_SN2 || IA64_GENERIC) && NUMA
help help
Say Y to support efficient handling of discontiguous physical memory, Say Y to support efficient handling of discontiguous physical memory,
...@@ -250,14 +238,10 @@ config VIRTUAL_MEM_MAP ...@@ -250,14 +238,10 @@ config VIRTUAL_MEM_MAP
default y if !IA64_HP_SIM default y if !IA64_HP_SIM
help help
Say Y to compile the kernel with support for a virtual mem map. Say Y to compile the kernel with support for a virtual mem map.
This is an alternate method of supporting large holes in the This code also only takes effect if a memory hole of greater than
physical address space on non NUMA machines. Since the DISCONTIGMEM 1 Gb is found during boot. You must turn this option on if you
option is not supported on machines with the ZX1 chipset, this is require the DISCONTIGMEM option for your machine. If you are
the only way of supporting more than 1 Gb of memory on those unsure, say Y.
machines. This code also only takes effect if a memory hole of
greater than 1 Gb is found during boot, so it is safe to enable
unless you require the DISCONTIGMEM option for your machine. If you
are unsure, say Y.
config IA64_MCA config IA64_MCA
bool "Enable IA-64 Machine Check Abort" bool "Enable IA-64 Machine Check Abort"
......
...@@ -64,7 +64,7 @@ core-$(CONFIG_IA64_SGI_SN2) += arch/ia64/sn/ ...@@ -64,7 +64,7 @@ core-$(CONFIG_IA64_SGI_SN2) += arch/ia64/sn/
drivers-$(CONFIG_PCI) += arch/ia64/pci/ drivers-$(CONFIG_PCI) += arch/ia64/pci/
drivers-$(CONFIG_IA64_HP_SIM) += arch/ia64/hp/sim/ drivers-$(CONFIG_IA64_HP_SIM) += arch/ia64/hp/sim/
drivers-$(CONFIG_IA64_HP_ZX1) += arch/ia64/hp/common/ arch/ia64/hp/zx1/ drivers-$(CONFIG_IA64_HP_ZX1) += arch/ia64/hp/common/ arch/ia64/hp/zx1/
drivers-$(CONFIG_IA64_GENERIC) += arch/ia64/hp/common/ arch/ia64/hp/zx1/ arch/ia64/hp/sim/ drivers-$(CONFIG_IA64_GENERIC) += arch/ia64/hp/common/ arch/ia64/hp/zx1/ arch/ia64/hp/sim/ arch/ia64/sn/
drivers-$(CONFIG_OPROFILE) += arch/ia64/oprofile/ drivers-$(CONFIG_OPROFILE) += arch/ia64/oprofile/
boot := arch/ia64/hp/sim/boot boot := arch/ia64/hp/sim/boot
......
...@@ -2486,11 +2486,14 @@ static int ...@@ -2486,11 +2486,14 @@ static int
putstat64 (struct stat64 *ubuf, struct kstat *kbuf) putstat64 (struct stat64 *ubuf, struct kstat *kbuf)
{ {
int err; int err;
u64 hdev;
if (clear_user(ubuf, sizeof(*ubuf))) if (clear_user(ubuf, sizeof(*ubuf)))
return -EFAULT; return -EFAULT;
err = __put_user(huge_encode_dev(kbuf->dev), &ubuf->st_dev); hdev = huge_encode_dev(kbuf->dev);
err = __put_user(hdev, (u32*)&ubuf->st_dev);
err |= __put_user(hdev >> 32, ((u32*)&ubuf->st_dev) + 1);
err |= __put_user(kbuf->ino, &ubuf->__st_ino); err |= __put_user(kbuf->ino, &ubuf->__st_ino);
err |= __put_user(kbuf->ino, &ubuf->st_ino_lo); err |= __put_user(kbuf->ino, &ubuf->st_ino_lo);
err |= __put_user(kbuf->ino >> 32, &ubuf->st_ino_hi); err |= __put_user(kbuf->ino >> 32, &ubuf->st_ino_hi);
...@@ -2498,7 +2501,9 @@ putstat64 (struct stat64 *ubuf, struct kstat *kbuf) ...@@ -2498,7 +2501,9 @@ putstat64 (struct stat64 *ubuf, struct kstat *kbuf)
err |= __put_user(kbuf->nlink, &ubuf->st_nlink); err |= __put_user(kbuf->nlink, &ubuf->st_nlink);
err |= __put_user(kbuf->uid, &ubuf->st_uid); err |= __put_user(kbuf->uid, &ubuf->st_uid);
err |= __put_user(kbuf->gid, &ubuf->st_gid); err |= __put_user(kbuf->gid, &ubuf->st_gid);
err |= __put_user(huge_encode_dev(kbuf->rdev), &ubuf->st_rdev); hdev = huge_encode_dev(kbuf->rdev);
err |= __put_user(hdev, (u32*)&ubuf->st_rdev);
err |= __put_user(hdev >> 32, ((u32*)&ubuf->st_rdev) + 1);
err |= __put_user(kbuf->size, &ubuf->st_size_lo); err |= __put_user(kbuf->size, &ubuf->st_size_lo);
err |= __put_user((kbuf->size >> 32), &ubuf->st_size_hi); err |= __put_user((kbuf->size >> 32), &ubuf->st_size_hi);
err |= __put_user(kbuf->atime.tv_sec, &ubuf->st_atime); err |= __put_user(kbuf->atime.tv_sec, &ubuf->st_atime);
...@@ -2724,8 +2729,8 @@ sys32_open (const char * filename, int flags, int mode) ...@@ -2724,8 +2729,8 @@ sys32_open (const char * filename, int flags, int mode)
struct epoll_event32 struct epoll_event32
{ {
u32 events; u32 events;
u64 data; u32 data[2];
} __attribute__((packed)); };
asmlinkage long asmlinkage long
sys32_epoll_ctl(int epfd, int op, int fd, struct epoll_event32 *event) sys32_epoll_ctl(int epfd, int op, int fd, struct epoll_event32 *event)
...@@ -2740,10 +2745,10 @@ sys32_epoll_ctl(int epfd, int op, int fd, struct epoll_event32 *event) ...@@ -2740,10 +2745,10 @@ sys32_epoll_ctl(int epfd, int op, int fd, struct epoll_event32 *event)
return error; return error;
__get_user(event64.events, &event->events); __get_user(event64.events, &event->events);
__get_user(data_halfword, (u32*)(&event->data)); __get_user(data_halfword, &event->data[0]);
event64.data = data_halfword; event64.data = data_halfword;
__get_user(data_halfword, ((u32*)(&event->data) + 1)); __get_user(data_halfword, &event->data[1]);
event64.data |= ((u64)data_halfword) << 32; event64.data |= (u64)data_halfword << 32;
set_fs(KERNEL_DS); set_fs(KERNEL_DS);
error = sys_epoll_ctl(epfd, op, fd, &event64); error = sys_epoll_ctl(epfd, op, fd, &event64);
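The epoll_event32 change above trades a packed u64 for two explicit
u32 words: the layout seen by 32-bit userland is unchanged, but the
array form needs no packed attribute and matches the word-sized
__get_user()/__put_user() accesses. A hypothetical stand-alone check
of that layout assumption:
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
struct ee32_old { uint32_t events; uint64_t data; } __attribute__((packed));
struct ee32_new { uint32_t events; uint32_t data[2]; };
int main(void)
{
	/* both layouts put the payload at byte offset 4 in a 12-byte
	 * struct, so the 32-bit ABI is unchanged */
	assert(offsetof(struct ee32_old, data) == 4);
	assert(offsetof(struct ee32_new, data) == 4);
	assert(sizeof(struct ee32_old) == 12);
	assert(sizeof(struct ee32_new) == 12);
	return 0;
}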
...@@ -2758,8 +2763,9 @@ sys32_epoll_wait(int epfd, struct epoll_event32 *events, int maxevents, ...@@ -2758,8 +2763,9 @@ sys32_epoll_wait(int epfd, struct epoll_event32 *events, int maxevents,
{ {
struct epoll_event *events64 = NULL; struct epoll_event *events64 = NULL;
mm_segment_t old_fs = get_fs(); mm_segment_t old_fs = get_fs();
int error; int error, numevents, size;
int evt_idx; int evt_idx;
int do_free_pages = 0;
if (maxevents <= 0) { if (maxevents <= 0) {
return -EINVAL; return -EINVAL;
...@@ -2770,43 +2776,45 @@ sys32_epoll_wait(int epfd, struct epoll_event32 *events, int maxevents, ...@@ -2770,43 +2776,45 @@ sys32_epoll_wait(int epfd, struct epoll_event32 *events, int maxevents,
maxevents * sizeof(struct epoll_event32)))) maxevents * sizeof(struct epoll_event32))))
return error; return error;
/* Allocate the space needed for the intermediate copy */ /*
events64 = kmalloc(maxevents * sizeof(struct epoll_event), GFP_KERNEL); * Allocate space for the intermediate copy. If the space needed
* is large enough to cause kmalloc to fail, then try again with
* __get_free_pages.
*/
size = maxevents * sizeof(struct epoll_event);
events64 = kmalloc(size, GFP_KERNEL);
if (events64 == NULL) { if (events64 == NULL) {
events64 = (struct epoll_event *)
__get_free_pages(GFP_KERNEL, get_order(size));
if (events64 == NULL)
return -ENOMEM; return -ENOMEM;
} do_free_pages = 1;
/* Expand the 32-bit structures into the 64-bit structures */
for (evt_idx = 0; evt_idx < maxevents; evt_idx++) {
u32 data_halfword;
__get_user(events64[evt_idx].events, &events[evt_idx].events);
__get_user(data_halfword, (u32*)(&events[evt_idx].data));
events64[evt_idx].data = data_halfword;
__get_user(data_halfword, ((u32*)(&events[evt_idx].data) + 1));
events64[evt_idx].data |= ((u64)data_halfword) << 32;
} }
/* Do the system call */ /* Do the system call */
set_fs(KERNEL_DS); /* copy_to/from_user should work on kernel mem*/ set_fs(KERNEL_DS); /* copy_to/from_user should work on kernel mem*/
error = sys_epoll_wait(epfd, events64, maxevents, timeout); numevents = sys_epoll_wait(epfd, events64, maxevents, timeout);
set_fs(old_fs); set_fs(old_fs);
/* Don't modify userspace memory if we're returning an error */ /* Don't modify userspace memory if we're returning an error */
if (!error) { if (numevents > 0) {
/* Translate the 64-bit structures back into the 32-bit /* Translate the 64-bit structures back into the 32-bit
structures */ structures */
for (evt_idx = 0; evt_idx < maxevents; evt_idx++) { for (evt_idx = 0; evt_idx < numevents; evt_idx++) {
__put_user(events64[evt_idx].events, __put_user(events64[evt_idx].events,
&events[evt_idx].events); &events[evt_idx].events);
__put_user((u32)(events64[evt_idx].data), __put_user((u32)events64[evt_idx].data,
(u32*)(&events[evt_idx].data)); &events[evt_idx].data[0]);
__put_user((u32)(events64[evt_idx].data >> 32), __put_user((u32)(events64[evt_idx].data >> 32),
((u32*)(&events[evt_idx].data) + 1)); &events[evt_idx].data[1]);
} }
} }
if (do_free_pages)
free_pages((unsigned long) events64, get_order(size));
else
kfree(events64); kfree(events64);
return error; return numevents;
} }
#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */ #ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
......
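The allocation logic added to sys32_epoll_wait() is a reusable
pattern: try kmalloc() first and, when the request is too big for the
slab allocator, fall back to whole pages, remembering which allocator
succeeded so the matching free routine runs later. A condensed sketch
(the helper names alloc_big()/free_big() are hypothetical):
#include <linux/slab.h>
#include <linux/mm.h>
static void *alloc_big(size_t size, int *from_pages)
{
	void *buf = kmalloc(size, GFP_KERNEL);
	*from_pages = 0;
	if (buf == NULL) {
		/* kmalloc() has a size cap; page allocation does not */
		buf = (void *) __get_free_pages(GFP_KERNEL, get_order(size));
		*from_pages = 1;
	}
	return buf;
}
static void free_big(void *buf, size_t size, int from_pages)
{
	if (from_pages)
		free_pages((unsigned long) buf, get_order(size));
	else
		kfree(buf);
}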
...@@ -56,6 +56,7 @@ void (*pm_idle) (void); ...@@ -56,6 +56,7 @@ void (*pm_idle) (void);
void (*pm_power_off) (void); void (*pm_power_off) (void);
unsigned char acpi_kbd_controller_present = 1; unsigned char acpi_kbd_controller_present = 1;
unsigned char acpi_legacy_devices;
int acpi_disabled; /* XXX this shouldn't be needed---we can't boot without ACPI! */ int acpi_disabled; /* XXX this shouldn't be needed---we can't boot without ACPI! */
...@@ -380,7 +381,7 @@ acpi_numa_processor_affinity_init (struct acpi_table_processor_affinity *pa) ...@@ -380,7 +381,7 @@ acpi_numa_processor_affinity_init (struct acpi_table_processor_affinity *pa)
void __init void __init
acpi_numa_memory_affinity_init (struct acpi_table_memory_affinity *ma) acpi_numa_memory_affinity_init (struct acpi_table_memory_affinity *ma)
{ {
unsigned long paddr, size, hole_size, min_hole_size; unsigned long paddr, size;
u8 pxm; u8 pxm;
struct node_memblk_s *p, *q, *pend; struct node_memblk_s *p, *q, *pend;
...@@ -402,34 +403,6 @@ acpi_numa_memory_affinity_init (struct acpi_table_memory_affinity *ma) ...@@ -402,34 +403,6 @@ acpi_numa_memory_affinity_init (struct acpi_table_memory_affinity *ma)
if (!ma->flags.enabled) if (!ma->flags.enabled)
return; return;
/*
* When the chunk is not the first one in the node, check distance
* from the other chunks. When the hole is too huge ignore the chunk.
* This restriction should be removed when multiple chunks per node
* is supported.
*/
pend = &node_memblk[num_memblks];
min_hole_size = 0;
for (p = &node_memblk[0]; p < pend; p++) {
if (p->nid != pxm)
continue;
if (p->start_paddr < paddr)
hole_size = paddr - (p->start_paddr + p->size);
else
hole_size = p->start_paddr - (paddr + size);
if (!min_hole_size || hole_size < min_hole_size)
min_hole_size = hole_size;
}
if (min_hole_size) {
if (min_hole_size > size) {
printk(KERN_ERR "Too huge memory hole. Ignoring %ld MBytes at %lx\n",
size/(1024*1024), paddr);
return;
}
}
/* record this node in proximity bitmap */ /* record this node in proximity bitmap */
pxm_bit_set(pxm); pxm_bit_set(pxm);
...@@ -454,6 +427,12 @@ acpi_numa_arch_fixup (void) ...@@ -454,6 +427,12 @@ acpi_numa_arch_fixup (void)
{ {
int i, j, node_from, node_to; int i, j, node_from, node_to;
/* If there's no SRAT, fix the phys_id */
if (srat_num_cpus == 0) {
node_cpuid[0].phys_id = hard_smp_processor_id();
return;
}
/* calculate total number of nodes in system from PXM bitmap */ /* calculate total number of nodes in system from PXM bitmap */
numnodes = 0; /* init total nodes in system */ numnodes = 0; /* init total nodes in system */
...@@ -531,6 +510,9 @@ acpi_parse_fadt (unsigned long phys_addr, unsigned long size) ...@@ -531,6 +510,9 @@ acpi_parse_fadt (unsigned long phys_addr, unsigned long size)
if (!(fadt->iapc_boot_arch & BAF_8042_KEYBOARD_CONTROLLER)) if (!(fadt->iapc_boot_arch & BAF_8042_KEYBOARD_CONTROLLER))
acpi_kbd_controller_present = 0; acpi_kbd_controller_present = 0;
if (fadt->iapc_boot_arch & BAF_LEGACY_DEVICES)
acpi_legacy_devices = 1;
acpi_register_irq(fadt->sci_int, ACPI_ACTIVE_LOW, ACPI_LEVEL_SENSITIVE); acpi_register_irq(fadt->sci_int, ACPI_ACTIVE_LOW, ACPI_LEVEL_SENSITIVE);
return 0; return 0;
} }
...@@ -614,6 +596,12 @@ acpi_boot_init (void) ...@@ -614,6 +596,12 @@ acpi_boot_init (void)
smp_build_cpu_map(); smp_build_cpu_map();
# ifdef CONFIG_NUMA # ifdef CONFIG_NUMA
if (srat_num_cpus == 0) {
int cpu, i = 1;
for (cpu = 0; cpu < smp_boot_data.cpu_count; cpu++)
if (smp_boot_data.cpu_phys_id[cpu] != hard_smp_processor_id())
node_cpuid[i++].phys_id = smp_boot_data.cpu_phys_id[cpu];
}
build_cpu_to_node_map(); build_cpu_to_node_map();
# endif # endif
#endif #endif
......
...@@ -33,16 +33,30 @@ void foo(void) ...@@ -33,16 +33,30 @@ void foo(void)
BLANK(); BLANK();
DEFINE(IA64_TASK_BLOCKED_OFFSET,offsetof (struct task_struct, blocked));
DEFINE(IA64_TASK_CLEAR_CHILD_TID_OFFSET,offsetof (struct task_struct, clear_child_tid)); DEFINE(IA64_TASK_CLEAR_CHILD_TID_OFFSET,offsetof (struct task_struct, clear_child_tid));
DEFINE(IA64_TASK_GROUP_LEADER_OFFSET, offsetof (struct task_struct, group_leader)); DEFINE(IA64_TASK_GROUP_LEADER_OFFSET, offsetof (struct task_struct, group_leader));
DEFINE(IA64_TASK_PENDING_OFFSET,offsetof (struct task_struct, pending));
DEFINE(IA64_TASK_PID_OFFSET, offsetof (struct task_struct, pid)); DEFINE(IA64_TASK_PID_OFFSET, offsetof (struct task_struct, pid));
DEFINE(IA64_TASK_REAL_PARENT_OFFSET, offsetof (struct task_struct, real_parent)); DEFINE(IA64_TASK_REAL_PARENT_OFFSET, offsetof (struct task_struct, real_parent));
DEFINE(IA64_TASK_SIGHAND_OFFSET,offsetof (struct task_struct, sighand));
DEFINE(IA64_TASK_SIGNAL_OFFSET,offsetof (struct task_struct, signal));
DEFINE(IA64_TASK_TGID_OFFSET, offsetof (struct task_struct, tgid)); DEFINE(IA64_TASK_TGID_OFFSET, offsetof (struct task_struct, tgid));
DEFINE(IA64_TASK_THREAD_KSP_OFFSET, offsetof (struct task_struct, thread.ksp)); DEFINE(IA64_TASK_THREAD_KSP_OFFSET, offsetof (struct task_struct, thread.ksp));
DEFINE(IA64_TASK_THREAD_ON_USTACK_OFFSET, offsetof (struct task_struct, thread.on_ustack)); DEFINE(IA64_TASK_THREAD_ON_USTACK_OFFSET, offsetof (struct task_struct, thread.on_ustack));
BLANK(); BLANK();
DEFINE(IA64_SIGHAND_SIGLOCK_OFFSET,offsetof (struct sighand_struct, siglock));
BLANK();
DEFINE(IA64_SIGNAL_GROUP_STOP_COUNT_OFFSET,offsetof (struct signal_struct,
group_stop_count));
DEFINE(IA64_SIGNAL_SHARED_PENDING_OFFSET,offsetof (struct signal_struct, shared_pending));
BLANK();
DEFINE(IA64_PT_REGS_B6_OFFSET, offsetof (struct pt_regs, b6)); DEFINE(IA64_PT_REGS_B6_OFFSET, offsetof (struct pt_regs, b6));
DEFINE(IA64_PT_REGS_B7_OFFSET, offsetof (struct pt_regs, b7)); DEFINE(IA64_PT_REGS_B7_OFFSET, offsetof (struct pt_regs, b7));
DEFINE(IA64_PT_REGS_AR_CSD_OFFSET, offsetof (struct pt_regs, ar_csd)); DEFINE(IA64_PT_REGS_AR_CSD_OFFSET, offsetof (struct pt_regs, ar_csd));
...@@ -158,6 +172,10 @@ void foo(void) ...@@ -158,6 +172,10 @@ void foo(void)
BLANK(); BLANK();
DEFINE(IA64_SIGPENDING_SIGNAL_OFFSET, offsetof (struct sigpending, signal));
BLANK();
DEFINE(IA64_SIGFRAME_ARG0_OFFSET, offsetof (struct sigframe, arg0)); DEFINE(IA64_SIGFRAME_ARG0_OFFSET, offsetof (struct sigframe, arg0));
DEFINE(IA64_SIGFRAME_ARG1_OFFSET, offsetof (struct sigframe, arg1)); DEFINE(IA64_SIGFRAME_ARG1_OFFSET, offsetof (struct sigframe, arg1));
DEFINE(IA64_SIGFRAME_ARG2_OFFSET, offsetof (struct sigframe, arg2)); DEFINE(IA64_SIGFRAME_ARG2_OFFSET, offsetof (struct sigframe, arg2));
......
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
* Copyright (C) 2003 Hewlett-Packard Co * Copyright (C) 2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com> * David Mosberger-Tang <davidm@hpl.hp.com>
* *
* 25-Sep-03 davidm Implement fsys_rt_sigprocmask().
* 18-Feb-03 louisk Implement fsys_gettimeofday(). * 18-Feb-03 louisk Implement fsys_gettimeofday().
* 28-Feb-03 davidm Fixed several bugs in fsys_gettimeofday(). Tuned it some more, * 28-Feb-03 davidm Fixed several bugs in fsys_gettimeofday(). Tuned it some more,
* probably broke it along the way... ;-) * probably broke it along the way... ;-)
...@@ -15,6 +16,7 @@ ...@@ -15,6 +16,7 @@
#include <asm/percpu.h> #include <asm/percpu.h>
#include <asm/thread_info.h> #include <asm/thread_info.h>
#include <asm/sal.h> #include <asm/sal.h>
#include <asm/signal.h>
#include <asm/system.h> #include <asm/system.h>
#include <asm/unistd.h> #include <asm/unistd.h>
...@@ -48,8 +50,7 @@ ENTRY(fsys_ni_syscall) ...@@ -48,8 +50,7 @@ ENTRY(fsys_ni_syscall)
.body .body
mov r8=ENOSYS mov r8=ENOSYS
mov r10=-1 mov r10=-1
MCKINLEY_E9_WORKAROUND FSYS_RETURN
br.ret.sptk.many b6
END(fsys_ni_syscall) END(fsys_ni_syscall)
ENTRY(fsys_getpid) ENTRY(fsys_getpid)
...@@ -66,8 +67,7 @@ ENTRY(fsys_getpid) ...@@ -66,8 +67,7 @@ ENTRY(fsys_getpid)
;; ;;
cmp.ne p8,p0=0,r9 cmp.ne p8,p0=0,r9
(p8) br.spnt.many fsys_fallback_syscall (p8) br.spnt.many fsys_fallback_syscall
MCKINLEY_E9_WORKAROUND FSYS_RETURN
br.ret.sptk.many b6
END(fsys_getpid) END(fsys_getpid)
ENTRY(fsys_getppid) ENTRY(fsys_getppid)
...@@ -114,8 +114,7 @@ ENTRY(fsys_getppid) ...@@ -114,8 +114,7 @@ ENTRY(fsys_getppid)
mov r18=0 // i must not leak kernel bits... mov r18=0 // i must not leak kernel bits...
mov r19=0 // i must not leak kernel bits... mov r19=0 // i must not leak kernel bits...
#endif #endif
MCKINLEY_E9_WORKAROUND FSYS_RETURN
br.ret.sptk.many b6
END(fsys_getppid) END(fsys_getppid)
ENTRY(fsys_set_tid_address) ENTRY(fsys_set_tid_address)
...@@ -141,8 +140,7 @@ ENTRY(fsys_set_tid_address) ...@@ -141,8 +140,7 @@ ENTRY(fsys_set_tid_address)
;; ;;
mov r17=0 // i must not leak kernel bits... mov r17=0 // i must not leak kernel bits...
mov r18=0 // i must not leak kernel bits... mov r18=0 // i must not leak kernel bits...
MCKINLEY_E9_WORKAROUND FSYS_RETURN
br.ret.sptk.many b6
END(fsys_set_tid_address) END(fsys_set_tid_address)
/* /*
...@@ -199,7 +197,7 @@ ENTRY(fsys_gettimeofday) ...@@ -199,7 +197,7 @@ ENTRY(fsys_gettimeofday)
adds r10=IA64_CPUINFO_ITM_DELTA_OFFSET, r10 adds r10=IA64_CPUINFO_ITM_DELTA_OFFSET, r10
(p7) tnat.nz p6,p0=r33 (p7) tnat.nz p6,p0=r33
(p6) br.cond.spnt.few .fail (p6) br.cond.spnt.few .fail_einval
adds r8=IA64_CPUINFO_NSEC_PER_CYC_OFFSET, r3 adds r8=IA64_CPUINFO_NSEC_PER_CYC_OFFSET, r3
movl r24=2361183241434822607 // for division hack (only for / 1000) movl r24=2361183241434822607 // for division hack (only for / 1000)
...@@ -225,8 +223,8 @@ ENTRY(fsys_gettimeofday) ...@@ -225,8 +223,8 @@ ENTRY(fsys_gettimeofday)
* to store the result. That's OK as long as the stores are also * to store the result. That's OK as long as the stores are also
* protected by EX(). * protected by EX().
*/ */
EX(.fail, probe.w.fault r32, 3) // this must come _after_ NaT-check EX(.fail_efault, probe.w.fault r32, 3) // this must come _after_ NaT-check
EX(.fail, probe.w.fault r10, 3) // this must come _after_ NaT-check EX(.fail_efault, probe.w.fault r10, 3) // this must come _after_ NaT-check
nop 0 nop 0
ldf8 f10=[r8] // f10 <- local_cpu_data->nsec_per_cyc value ldf8 f10=[r8] // f10 <- local_cpu_data->nsec_per_cyc value
...@@ -311,14 +309,13 @@ EX(.fail, probe.w.fault r10, 3) // this must come _after_ NaT-check ...@@ -311,14 +309,13 @@ EX(.fail, probe.w.fault r10, 3) // this must come _after_ NaT-check
(p7) br.spnt.many 1b (p7) br.spnt.many 1b
// finally: r2 = sec, r3 = usec // finally: r2 = sec, r3 = usec
EX(.fail, st8 [r32]=r2) EX(.fail_efault, st8 [r32]=r2)
adds r9=8, r32 adds r9=8, r32
mov r8=r0 // success mov r8=r0 // success
;; ;;
EX(.fail, st8 [r9]=r3) // store them in the timeval struct EX(.fail_efault, st8 [r9]=r3) // store them in the timeval struct
mov r10=0 mov r10=0
MCKINLEY_E9_WORKAROUND FSYS_RETURN
br.ret.sptk.many b6 // return to caller
/* /*
* Note: We are NOT clearing the scratch registers here. Since the only things * Note: We are NOT clearing the scratch registers here. Since the only things
* in those registers are time-related variables and some addresses (which * in those registers are time-related variables and some addresses (which
...@@ -326,12 +323,183 @@ EX(.fail, st8 [r9]=r3) // store them in the timeval struct ...@@ -326,12 +323,183 @@ EX(.fail, st8 [r9]=r3) // store them in the timeval struct
* and we should be fine. * and we should be fine.
*/ */
.fail: adds r8=EINVAL, r0 // r8 = EINVAL .fail_einval:
adds r10=-1, r0 // r10 = -1 mov r8=EINVAL // r8 = EINVAL
MCKINLEY_E9_WORKAROUND mov r10=-1 // r10 = -1
br.ret.spnt.many b6 // return with r8 set to EINVAL FSYS_RETURN
.fail_efault:
mov r8=EFAULT // r8 = EFAULT
mov r10=-1 // r10 = -1
FSYS_RETURN
END(fsys_gettimeofday) END(fsys_gettimeofday)
/*
* long fsys_rt_sigprocmask (int how, sigset_t *set, sigset_t *oset, size_t sigsetsize).
*/
#if _NSIG_WORDS != 1
# error Sorry, fsys_rt_sigprocmask() needs to be updated for _NSIG_WORDS != 1.
#endif
ENTRY(fsys_rt_sigprocmask)
.prologue
.altrp b6
.body
mf // ensure reading of current->blocked is ordered
add r2=IA64_TASK_BLOCKED_OFFSET,r16
add r9=TI_FLAGS+IA64_TASK_SIZE,r16
;;
/*
* Since we're only reading a single word, we can do it
* atomically without acquiring current->sighand->siglock. To
* be on the safe side, we need a fully-ordered load, though:
*/
ld8.acq r3=[r2] // read/prefetch current->blocked
ld4 r9=[r9]
add r31=IA64_TASK_SIGHAND_OFFSET,r16
;;
#ifdef CONFIG_SMP
ld8 r31=[r31] // r31 <- current->sighand
#endif
and r9=TIF_ALLWORK_MASK,r9
tnat.nz p6,p0=r32
;;
cmp.ne p7,p0=0,r9
tnat.nz.or p6,p0=r35
tnat.nz p8,p0=r34
;;
cmp.ne p15,p0=r0,r34 // oset != NULL?
cmp.ne.or p6,p0=_NSIG_WORDS*8,r35
tnat.nz.or p8,p0=r33
(p6) br.spnt.few .fail_einval // fail with EINVAL
(p7) br.spnt.many fsys_fallback_syscall // got pending kernel work...
(p8) br.spnt.few .fail_efault // fail with EFAULT
;;
cmp.eq p6,p7=r0,r33 // set == NULL?
add r31=IA64_SIGHAND_SIGLOCK_OFFSET,r31 // r31 <- current->sighand->siglock
(p6) br.dpnt.many .store_mask // -> short-circuit to just reading the signal mask
/* Argh, we actually have to do some work and _update_ the signal mask: */
EX(.fail_efault, probe.r.fault r33, 3) // verify user has read-access to *set
EX(.fail_efault, ld8 r14=[r33]) // r14 <- *set
mov r17=(1 << (SIGKILL - 1)) | (1 << (SIGSTOP - 1))
;;
rsm psr.i // mask interrupt delivery
mov ar.ccv=0
andcm r14=r14,r17 // filter out SIGKILL & SIGSTOP
#ifdef CONFIG_SMP
mov r17=1
;;
cmpxchg4.acq r18=[r31],r17,ar.ccv // try to acquire the lock
mov r8=EINVAL // default to EINVAL
;;
ld8 r3=[r2] // re-read current->blocked now that we hold the lock
cmp4.ne p6,p0=r18,r0
(p6) br.cond.spnt.many .lock_contention
;;
#else
ld8 r3=[r2] // re-read current->blocked now that we hold the lock
mov r8=EINVAL // default to EINVAL
#endif
add r18=IA64_TASK_PENDING_OFFSET+IA64_SIGPENDING_SIGNAL_OFFSET,r16
add r19=IA64_TASK_SIGNAL_OFFSET,r16
cmp4.eq p6,p0=SIG_BLOCK,r32
;;
ld8 r19=[r19] // r19 <- current->signal
cmp4.eq p7,p0=SIG_UNBLOCK,r32
cmp4.eq p8,p0=SIG_SETMASK,r32
;;
ld8 r18=[r18] // r18 <- current->pending.signal
.pred.rel.mutex p6,p7,p8
(p6) or r3=r3,r14 // SIG_BLOCK
(p7) andcm r3=r3,r14 // SIG_UNBLOCK
(p8) mov r3=r14 // SIG_SETMASK
(p6) mov r8=0 // clear error code
// recalc_sigpending()
add r17=IA64_SIGNAL_GROUP_STOP_COUNT_OFFSET,r19
add r19=IA64_SIGNAL_SHARED_PENDING_OFFSET+IA64_SIGPENDING_SIGNAL_OFFSET,r19
;;
ld4 r17=[r17] // r17 <- current->signal->group_stop_count
(p7) mov r8=0 // clear error code
ld8 r19=[r19] // r19 <- current->signal->shared_pending
;;
cmp4.gt p6,p7=r17,r0 // p6/p7 <- (current->signal->group_stop_count > 0)?
(p8) mov r8=0 // clear error code
or r18=r18,r19 // r18 <- current->pending | current->signal->shared_pending
;;
// r18 <- (current->pending | current->signal->shared_pending) & ~current->blocked:
andcm r18=r18,r3
add r9=TI_FLAGS+IA64_TASK_SIZE,r16
;;
(p7) cmp.ne.or.andcm p6,p7=r18,r0 // p6/p7 <- signal pending
mov r19=0 // i must not leak kernel bits...
(p6) br.cond.dpnt.many .sig_pending
;;
1: ld4 r17=[r9] // r17 <- current->thread_info->flags
;;
mov ar.ccv=r17
and r18=~_TIF_SIGPENDING,r17 // r18 <- r17 & ~(1 << TIF_SIGPENDING)
;;
st8 [r2]=r3 // update current->blocked with new mask
cmpxchg4.acq r14=[r9],r18,ar.ccv // current->thread_info->flags <- r18
;;
cmp.ne p6,p0=r17,r14 // update failed?
(p6) br.cond.spnt.few 1b // yes -> retry
#ifdef CONFIG_SMP
st4.rel [r31]=r0 // release the lock
#endif
ssm psr.i
cmp.ne p9,p0=r8,r0 // check for bad HOW value
;;
srlz.d // ensure psr.i is set again
mov r18=0 // i must not leak kernel bits...
(p9) br.spnt.few .fail_einval // bail out for bad HOW value
.store_mask:
EX(.fail_efault, (p15) probe.w.fault r34, 3) // verify user has write-access to *oset
EX(.fail_efault, (p15) st8 [r34]=r3)
mov r2=0 // i must not leak kernel bits...
mov r3=0 // i must not leak kernel bits...
mov r8=0 // return 0
mov r9=0 // i must not leak kernel bits...
mov r14=0 // i must not leak kernel bits...
mov r17=0 // i must not leak kernel bits...
mov r31=0 // i must not leak kernel bits...
FSYS_RETURN
.sig_pending:
#ifdef CONFIG_SMP
st4.rel [r31]=r0 // release the lock
#endif
ssm psr.i
;;
srlz.d
br.sptk.many fsys_fallback_syscall // with signal pending, do the heavy-weight syscall
#ifdef CONFIG_SMP
.lock_contention:
/* Rather than spinning here, fall back on doing a heavy-weight syscall. */
ssm psr.i
;;
srlz.d
br.sptk.many fsys_fallback_syscall
#endif
END(fsys_rt_sigprocmask)
ENTRY(fsys_fallback_syscall) ENTRY(fsys_fallback_syscall)
.prologue .prologue
.altrp b6 .altrp b6
...@@ -600,7 +768,7 @@ fsyscall_table: ...@@ -600,7 +768,7 @@ fsyscall_table:
data8 0 // sigaltstack data8 0 // sigaltstack
data8 0 // rt_sigaction data8 0 // rt_sigaction
data8 0 // rt_sigpending data8 0 // rt_sigpending
data8 0 // rt_sigprocmask data8 fsys_rt_sigprocmask // rt_sigprocmask
data8 0 // rt_sigqueueinfo // 1180 data8 0 // rt_sigqueueinfo // 1180
data8 0 // rt_sigreturn data8 0 // rt_sigreturn
data8 0 // rt_sigsuspend data8 0 // rt_sigsuspend
......
...@@ -118,8 +118,7 @@ GLOBAL_ENTRY(__kernel_syscall_via_epc) ...@@ -118,8 +118,7 @@ GLOBAL_ENTRY(__kernel_syscall_via_epc)
mov r10=-1 mov r10=-1
mov r8=ENOSYS mov r8=ENOSYS
MCKINLEY_E9_WORKAROUND FSYS_RETURN
br.ret.sptk.many b6
END(__kernel_syscall_via_epc) END(__kernel_syscall_via_epc)
# define ARG0_OFF (16 + IA64_SIGFRAME_ARG0_OFFSET) # define ARG0_OFF (16 + IA64_SIGFRAME_ARG0_OFFSET)
......
...@@ -797,6 +797,25 @@ GLOBAL_ENTRY(ia64_switch_mode_virt) ...@@ -797,6 +797,25 @@ GLOBAL_ENTRY(ia64_switch_mode_virt)
br.ret.sptk.many rp br.ret.sptk.many rp
END(ia64_switch_mode_virt) END(ia64_switch_mode_virt)
GLOBAL_ENTRY(ia64_delay_loop)
.prologue
{ nop 0 // work around GAS unwind info generation bug...
.save ar.lc,r2
mov r2=ar.lc
.body
;;
mov ar.lc=r32
}
;;
// force loop to be 32-byte aligned (GAS bug means we cannot use .align
// inside function body without corrupting unwind info).
{ nop 0 }
1: br.cloop.sptk.few 1b
;;
mov ar.lc=r2
br.ret.sptk.many rp
END(ia64_delay_loop)
#ifdef CONFIG_IA64_BRL_EMU #ifdef CONFIG_IA64_BRL_EMU
/* /*
......
...@@ -81,8 +81,6 @@ u64 ia64_init_stack[KERNEL_STACK_SIZE/8] __attribute__((aligned(16))); ...@@ -81,8 +81,6 @@ u64 ia64_init_stack[KERNEL_STACK_SIZE/8] __attribute__((aligned(16)));
u64 ia64_mca_sal_data_area[1356]; u64 ia64_mca_sal_data_area[1356];
u64 ia64_tlb_functional; u64 ia64_tlb_functional;
u64 ia64_os_mca_recovery_successful; u64 ia64_os_mca_recovery_successful;
/* TODO: need to assign min-state structure to UC memory */
u64 ia64_mca_min_state_save_info[MIN_STATE_AREA_SIZE] __attribute__((aligned(512)));
static void ia64_mca_wakeup_ipi_wait(void); static void ia64_mca_wakeup_ipi_wait(void);
static void ia64_mca_wakeup(int cpu); static void ia64_mca_wakeup(int cpu);
static void ia64_mca_wakeup_all(void); static void ia64_mca_wakeup_all(void);
...@@ -465,26 +463,6 @@ ia64_mca_register_cpev (int cpev) ...@@ -465,26 +463,6 @@ ia64_mca_register_cpev (int cpev)
#endif /* PLATFORM_MCA_HANDLERS */ #endif /* PLATFORM_MCA_HANDLERS */
/*
* routine to process and prepare to dump min_state_save
* information for debugging purposes.
*/
void
ia64_process_min_state_save (pal_min_state_area_t *pmss)
{
int i, max = MIN_STATE_AREA_SIZE;
u64 *tpmss_ptr = (u64 *)pmss;
u64 *return_min_state_ptr = ia64_mca_min_state_save_info;
for (i=0;i<max;i++) {
/* copy min-state register info for eventual return to PAL */
*return_min_state_ptr++ = *tpmss_ptr;
tpmss_ptr++; /* skip to next entry */
}
}
/* /*
* ia64_mca_cmc_vector_setup * ia64_mca_cmc_vector_setup
* *
...@@ -828,7 +806,7 @@ ia64_mca_wakeup_ipi_wait(void) ...@@ -828,7 +806,7 @@ ia64_mca_wakeup_ipi_wait(void)
irr = ia64_getreg(_IA64_REG_CR_IRR3); irr = ia64_getreg(_IA64_REG_CR_IRR3);
break; break;
} }
} while (!(irr & (1 << irr_bit))) ; } while (!(irr & (1UL << irr_bit))) ;
} }
/* /*
...@@ -961,9 +939,8 @@ ia64_return_to_sal_check(void) ...@@ -961,9 +939,8 @@ ia64_return_to_sal_check(void)
/* Default = tell SAL to return to same context */ /* Default = tell SAL to return to same context */
ia64_os_to_sal_handoff_state.imots_context = IA64_MCA_SAME_CONTEXT; ia64_os_to_sal_handoff_state.imots_context = IA64_MCA_SAME_CONTEXT;
/* Register pointer to new min state values */
ia64_os_to_sal_handoff_state.imots_new_min_state = ia64_os_to_sal_handoff_state.imots_new_min_state =
ia64_mca_min_state_save_info; (u64 *)ia64_sal_to_os_handoff_state.pal_min_state;
} }
/* /*
...@@ -2154,9 +2131,6 @@ ia64_log_proc_dev_err_info_print (sal_log_processor_info_t *slpi, ...@@ -2154,9 +2131,6 @@ ia64_log_proc_dev_err_info_print (sal_log_processor_info_t *slpi,
if (slpi->valid.psi_static_struct) { if (slpi->valid.psi_static_struct) {
spsi = (sal_processor_static_info_t *)p_data; spsi = (sal_processor_static_info_t *)p_data;
/* copy interrupted context PAL min-state info */
ia64_process_min_state_save(&spsi->min_state_area);
/* Print branch register contents if valid */ /* Print branch register contents if valid */
if (spsi->valid.br) if (spsi->valid.br)
ia64_log_processor_regs_print(spsi->br, 8, "Branch", "br", ia64_log_processor_regs_print(spsi->br, 8, "Branch", "br",
......
...@@ -130,9 +130,11 @@ ia64_patch_mckinley_e9 (unsigned long start, unsigned long end) ...@@ -130,9 +130,11 @@ ia64_patch_mckinley_e9 (unsigned long start, unsigned long end)
while (offp < (s32 *) end) { while (offp < (s32 *) end) {
wp = (u64 *) ia64_imva((char *) offp + *offp); wp = (u64 *) ia64_imva((char *) offp + *offp);
wp[0] = 0x0000000100000000; wp[0] = 0x0000000100000000; /* nop.m 0; nop.i 0; nop.i 0 */
wp[1] = 0x0004000000000200; wp[1] = 0x0004000000000200;
ia64_fc(wp); wp[2] = 0x0000000100000011; /* nop.m 0; nop.i 0; br.ret.sptk.many b6 */
wp[3] = 0x0084006880000200;
ia64_fc(wp); ia64_fc(wp + 2);
++offp; ++offp;
} }
ia64_sync_i(); ia64_sync_i();
......
...@@ -81,6 +81,8 @@ pfm_ita_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnu ...@@ -81,6 +81,8 @@ pfm_ita_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnu
*/ */
if (cnum == 13 && ((*val & 0x1) == 0UL) && ctx->ctx_fl_using_dbreg == 0) { if (cnum == 13 && ((*val & 0x1) == 0UL) && ctx->ctx_fl_using_dbreg == 0) {
DPRINT(("pmc[%d]=0x%lx has active pmc13.ta cleared, clearing ibr\n", cnum, *val));
/* don't mix debug with perfmon */ /* don't mix debug with perfmon */
if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL; if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL;
...@@ -98,6 +100,8 @@ pfm_ita_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnu ...@@ -98,6 +100,8 @@ pfm_ita_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnu
*/ */
if (cnum == 11 && ((*val >> 28)& 0x1) == 0 && ctx->ctx_fl_using_dbreg == 0) { if (cnum == 11 && ((*val >> 28)& 0x1) == 0 && ctx->ctx_fl_using_dbreg == 0) {
DPRINT(("pmc[%d]=0x%lx has active pmc11.pt cleared, clearing dbr\n", cnum, *val));
/* don't mix debug with perfmon */ /* don't mix debug with perfmon */
if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL; if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL;
......
...@@ -109,10 +109,20 @@ pfm_mck_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnu ...@@ -109,10 +109,20 @@ pfm_mck_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnu
if (ctx == NULL) return -EINVAL; if (ctx == NULL) return -EINVAL;
/* /*
* we must clear the debug registers if any pmc13.ena_dbrpX bit is enabled * we must clear the debug registers if pmc13 has a value that enables
* before they are written (fl_using_dbreg==0) to avoid picking up stale information. * memory pipeline event constraints. In this case we need to clear
* the debug registers if they have not yet been accessed. This is required
* to avoid picking up stale state.
* PMC13 is "active" if:
* one of the pmc13.cfg_dbrpXX fields is different from 0x3
* AND
* the corresponding pmc13.ena_dbrpXX bit is set.
*
* For now, we just check cfg_dbrpXX != 0x3.
*/ */
if (cnum == 13 && (*val & (0xfUL << 45)) && ctx->ctx_fl_using_dbreg == 0) { if (cnum == 13 && ((*val & 0x18181818UL) != 0x18181818UL) && ctx->ctx_fl_using_dbreg == 0) {
DPRINT(("pmc[%d]=0x%lx has active pmc13 settings, clearing dbr\n", cnum, *val));
/* don't mix debug with perfmon */ /* don't mix debug with perfmon */
if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL; if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL;
...@@ -128,7 +138,9 @@ pfm_mck_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnu ...@@ -128,7 +138,9 @@ pfm_mck_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnu
* we must clear the (instruction) debug registers if any pmc14.ibrpX bit is enabled * we must clear the (instruction) debug registers if any pmc14.ibrpX bit is enabled
* before they are (fl_using_dbreg==0) to avoid picking up stale information. * before they are (fl_using_dbreg==0) to avoid picking up stale information.
*/ */
if (cnum == 14 && ((*val & 0x2222) != 0x2222) && ctx->ctx_fl_using_dbreg == 0) { if (cnum == 14 && ((*val & 0x2222UL) != 0x2222UL) && ctx->ctx_fl_using_dbreg == 0) {
DPRINT(("pmc[%d]=0x%lx has active pmc14 settings, clearing ibr\n", cnum, *val));
/* don't mix debug with perfmon */ /* don't mix debug with perfmon */
if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL; if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL;
...@@ -170,7 +182,7 @@ pfm_mck_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnu ...@@ -170,7 +182,7 @@ pfm_mck_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnu
&& ((((val14>>1) & 0x3) == 0x2 || ((val14>>1) & 0x3) == 0x0) && ((((val14>>1) & 0x3) == 0x2 || ((val14>>1) & 0x3) == 0x0)
||(((val14>>4) & 0x3) == 0x2 || ((val14>>4) & 0x3) == 0x0)); ||(((val14>>4) & 0x3) == 0x2 || ((val14>>4) & 0x3) == 0x0));
if (ret) printk("perfmon: failure check_case1\n"); if (ret) DPRINT((KERN_DEBUG "perfmon: failure check_case1\n"));
} }
return ret ? -EINVAL : 0; return ret ? -EINVAL : 0;
......
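The 0x18181818 constant in the new pmc13 test packs four copies of one
field check: bits 3-4 of each of the four low bytes hold a cfg_dbrpXX
field, and the test fires when any field differs from 0x3. An
equivalent per-field loop, for clarity (illustrative only; the field
placement is inferred from the mask itself):
/* nonzero when some cfg_dbrpXX field != 0x3, i.e. when pmc13
 * constrains a memory pipeline event and the debug registers must
 * be cleared first */
static int pmc13_needs_dbr_clear(unsigned long val)
{
	int i;
	for (i = 0; i < 4; i++)
		if (((val >> (8 * i + 3)) & 0x3UL) != 0x3UL)
			return 1;
	return 0;	/* same as (val & 0x18181818UL) == 0x18181818UL */
}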
...@@ -30,6 +30,8 @@ ...@@ -30,6 +30,8 @@
#include <linux/string.h> #include <linux/string.h>
#include <linux/threads.h> #include <linux/threads.h>
#include <linux/tty.h> #include <linux/tty.h>
#include <linux/serial.h>
#include <linux/serial_core.h>
#include <linux/efi.h> #include <linux/efi.h>
#include <linux/initrd.h> #include <linux/initrd.h>
...@@ -43,6 +45,7 @@ ...@@ -43,6 +45,7 @@
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/sal.h> #include <asm/sal.h>
#include <asm/sections.h> #include <asm/sections.h>
#include <asm/serial.h>
#include <asm/smp.h> #include <asm/smp.h>
#include <asm/system.h> #include <asm/system.h>
#include <asm/unistd.h> #include <asm/unistd.h>
...@@ -101,7 +104,7 @@ int ...@@ -101,7 +104,7 @@ int
filter_rsvd_memory (unsigned long start, unsigned long end, void *arg) filter_rsvd_memory (unsigned long start, unsigned long end, void *arg)
{ {
unsigned long range_start, range_end, prev_start; unsigned long range_start, range_end, prev_start;
void (*func)(unsigned long, unsigned long); void (*func)(unsigned long, unsigned long, int);
int i; int i;
#if IGNORE_PFN0 #if IGNORE_PFN0
...@@ -122,11 +125,7 @@ filter_rsvd_memory (unsigned long start, unsigned long end, void *arg) ...@@ -122,11 +125,7 @@ filter_rsvd_memory (unsigned long start, unsigned long end, void *arg)
range_end = min(end, rsvd_region[i].start); range_end = min(end, rsvd_region[i].start);
if (range_start < range_end) if (range_start < range_end)
#ifdef CONFIG_DISCONTIGMEM call_pernode_memory(__pa(range_start), range_end - range_start, func);
call_pernode_memory(__pa(range_start), __pa(range_end), func);
#else
(*func)(__pa(range_start), range_end - range_start);
#endif
/* nothing more available in this segment */ /* nothing more available in this segment */
if (range_end == end) return 0; if (range_end == end) return 0;
...@@ -225,6 +224,25 @@ find_initrd (void) ...@@ -225,6 +224,25 @@ find_initrd (void)
#endif #endif
} }
#ifdef CONFIG_SERIAL_8250_CONSOLE
static void __init
setup_serial_legacy (void)
{
struct uart_port port;
unsigned int i, iobase[] = {0x3f8, 0x2f8};
printk(KERN_INFO "Registering legacy COM ports for serial console\n");
memset(&port, 0, sizeof(port));
port.iotype = SERIAL_IO_PORT;
port.uartclk = BASE_BAUD * 16;
for (i = 0; i < ARRAY_SIZE(iobase); i++) {
port.line = i;
port.iobase = iobase[i];
early_serial_setup(&port);
}
}
#endif
void __init void __init
setup_arch (char **cmdline_p) setup_arch (char **cmdline_p)
{ {
...@@ -239,7 +257,6 @@ setup_arch (char **cmdline_p) ...@@ -239,7 +257,6 @@ setup_arch (char **cmdline_p)
strlcpy(saved_command_line, *cmdline_p, sizeof(saved_command_line)); strlcpy(saved_command_line, *cmdline_p, sizeof(saved_command_line));
efi_init(); efi_init();
find_memory();
#ifdef CONFIG_ACPI_BOOT #ifdef CONFIG_ACPI_BOOT
/* Initialize the ACPI boot-time table parser */ /* Initialize the ACPI boot-time table parser */
...@@ -253,6 +270,8 @@ setup_arch (char **cmdline_p) ...@@ -253,6 +270,8 @@ setup_arch (char **cmdline_p)
# endif # endif
#endif /* CONFIG_ACPI_BOOT */ #endif /* CONFIG_ACPI_BOOT */
find_memory();
/* process SAL system table: */ /* process SAL system table: */
ia64_sal_init(efi.sal_systab); ia64_sal_init(efi.sal_systab);
...@@ -297,11 +316,22 @@ setup_arch (char **cmdline_p) ...@@ -297,11 +316,22 @@ setup_arch (char **cmdline_p)
#ifdef CONFIG_SERIAL_8250_HCDP #ifdef CONFIG_SERIAL_8250_HCDP
if (efi.hcdp) { if (efi.hcdp) {
void setup_serial_hcdp(void *); void setup_serial_hcdp(void *);
/* Setup the serial ports described by HCDP */
setup_serial_hcdp(efi.hcdp); setup_serial_hcdp(efi.hcdp);
} }
#endif #endif
#ifdef CONFIG_SERIAL_8250_CONSOLE
/*
* Without HCDP, we won't discover any serial ports until the serial driver looks
* in the ACPI namespace. If ACPI claims there are some legacy devices, register
* the legacy COM ports so the serial console works earlier. This is slightly dangerous
* because we don't *really* know whether there's anything there, but we hope that
* all new boxes will implement HCDP.
*/
extern unsigned char acpi_legacy_devices;
if (!efi.hcdp && acpi_legacy_devices)
setup_serial_legacy();
#endif
#ifdef CONFIG_VT #ifdef CONFIG_VT
# if defined(CONFIG_DUMMY_CONSOLE) # if defined(CONFIG_DUMMY_CONSOLE)
conswitchp = &dummy_con; conswitchp = &dummy_con;
...@@ -544,28 +574,7 @@ cpu_init (void) ...@@ -544,28 +574,7 @@ cpu_init (void)
struct cpuinfo_ia64 *cpu_info; struct cpuinfo_ia64 *cpu_info;
void *cpu_data; void *cpu_data;
#ifdef CONFIG_SMP cpu_data = per_cpu_init();
int cpu;
/*
* get_free_pages() cannot be used before cpu_init() is done. The BSP allocates
* "NR_CPUS" pages for all CPUs so that APs need not call get_zeroed_page().
*/
if (smp_processor_id() == 0) {
cpu_data = __alloc_bootmem(PERCPU_PAGE_SIZE * NR_CPUS, PERCPU_PAGE_SIZE,
__pa(MAX_DMA_ADDRESS));
for (cpu = 0; cpu < NR_CPUS; cpu++) {
memcpy(cpu_data, __phys_per_cpu_start, __per_cpu_end - __per_cpu_start);
__per_cpu_offset[cpu] = (char *) cpu_data - __per_cpu_start;
cpu_data += PERCPU_PAGE_SIZE;
per_cpu(local_per_cpu_offset, cpu) = __per_cpu_offset[cpu];
}
}
cpu_data = __per_cpu_start + __per_cpu_offset[smp_processor_id()];
#else /* !CONFIG_SMP */
cpu_data = __phys_per_cpu_start;
#endif /* !CONFIG_SMP */
get_max_cacheline_size(); get_max_cacheline_size();
...@@ -576,9 +585,6 @@ cpu_init (void) ...@@ -576,9 +585,6 @@ cpu_init (void)
* accessing cpu_data() through the canonical per-CPU address. * accessing cpu_data() through the canonical per-CPU address.
*/ */
cpu_info = cpu_data + ((char *) &__ia64_per_cpu_var(cpu_info) - __per_cpu_start); cpu_info = cpu_data + ((char *) &__ia64_per_cpu_var(cpu_info) - __per_cpu_start);
#ifdef CONFIG_NUMA
cpu_info->node_data = get_node_data_ptr();
#endif
identify_cpu(cpu_info); identify_cpu(cpu_info);
#ifdef CONFIG_MCKINLEY #ifdef CONFIG_MCKINLEY
......
...@@ -65,8 +65,12 @@ itc_update (long delta_nsec) ...@@ -65,8 +65,12 @@ itc_update (long delta_nsec)
} }
/* /*
* Return the number of nano-seconds that elapsed since the last update to jiffy. The * Return the number of nano-seconds that elapsed since the last
* xtime_lock must be at least read-locked when calling this routine. * update to jiffy. It is quite possible that the timer interrupt
* will interrupt this and result in a race for any of jiffies,
* wall_jiffies or itm_next. Thus, the xtime_lock must be at least
* read synchronised when calling this routine (see do_gettimeofday()
* below for an example).
*/ */
unsigned long unsigned long
itc_get_offset (void) itc_get_offset (void)
...@@ -77,11 +81,6 @@ itc_get_offset (void) ...@@ -77,11 +81,6 @@ itc_get_offset (void)
last_tick = (cpu_data(TIME_KEEPER_ID)->itm_next last_tick = (cpu_data(TIME_KEEPER_ID)->itm_next
- (lost + 1)*cpu_data(TIME_KEEPER_ID)->itm_delta); - (lost + 1)*cpu_data(TIME_KEEPER_ID)->itm_delta);
if (unlikely((long) (now - last_tick) < 0)) {
printk(KERN_ERR "CPU %d: now < last_tick (now=0x%lx,last_tick=0x%lx)!\n",
smp_processor_id(), now, last_tick);
return last_nsec_offset;
}
elapsed_cycles = now - last_tick; elapsed_cycles = now - last_tick;
return (elapsed_cycles*local_cpu_data->nsec_per_cyc) >> IA64_NSEC_PER_CYC_SHIFT; return (elapsed_cycles*local_cpu_data->nsec_per_cyc) >> IA64_NSEC_PER_CYC_SHIFT;
} }
......
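
The locking rule in the comment above can be made concrete with a small
sketch. This is not code from this patch; it assumes xtime_lock is the
seqlock_t used by the 2.5/2.6 timekeeping code and mirrors the retry
pattern of do_gettimeofday():

-- example reader (sketch) --
	unsigned long seq, nsec;

	do {
		seq = read_seqbegin(&xtime_lock);	/* snapshot sequence */
		nsec = itc_get_offset();		/* may race with the timer irq */
	} while (read_seqretry(&xtime_lock, seq));	/* retry if it did */
-- end example --
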
...@@ -25,6 +25,10 @@ ...@@ -25,6 +25,10 @@
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/sections.h> #include <asm/sections.h>
#ifdef CONFIG_VIRTUAL_MEM_MAP
static unsigned long num_dma_physpages;
#endif
/** /**
* show_mem - display a memory statistics summary * show_mem - display a memory statistics summary
* *
...@@ -161,3 +165,133 @@ find_memory (void) ...@@ -161,3 +165,133 @@ find_memory (void)
find_initrd(); find_initrd();
} }
#ifdef CONFIG_SMP
/**
* per_cpu_init - setup per-cpu variables
*
* Allocate and setup per-cpu data areas.
*/
void *
per_cpu_init (void)
{
void *cpu_data;
int cpu;
/*
* get_free_pages() cannot be used before cpu_init() is done. The BSP
* allocates "NR_CPUS" pages for all CPUs so that an AP never has to
* call get_zeroed_page().
*/
if (smp_processor_id() == 0) {
cpu_data = __alloc_bootmem(PERCPU_PAGE_SIZE * NR_CPUS,
PERCPU_PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
for (cpu = 0; cpu < NR_CPUS; cpu++) {
memcpy(cpu_data, __phys_per_cpu_start, __per_cpu_end - __per_cpu_start);
__per_cpu_offset[cpu] = (char *) cpu_data - __per_cpu_start;
cpu_data += PERCPU_PAGE_SIZE;
per_cpu(local_per_cpu_offset, cpu) = __per_cpu_offset[cpu];
}
}
return __per_cpu_start + __per_cpu_offset[smp_processor_id()];
}
#endif /* CONFIG_SMP */
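
As a rough illustration of how the offsets initialized above are consumed
(conceptually what the generic per_cpu() accessor expands to; this macro is
a sketch, not code from this patch): a per-CPU variable is reached by
adding the CPU's offset to the variable's address in the per-CPU image.

-- example (sketch) --
	#define my_per_cpu(var, cpu) \
		(*((__typeof__(&(var))) ((char *) &(var) + __per_cpu_offset[cpu])))
-- end example --
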
static int
count_pages (u64 start, u64 end, void *arg)
{
unsigned long *count = arg;
*count += (end - start) >> PAGE_SHIFT;
return 0;
}
#ifdef CONFIG_VIRTUAL_MEM_MAP
static int
count_dma_pages (u64 start, u64 end, void *arg)
{
unsigned long *count = arg;
if (end <= MAX_DMA_ADDRESS)
*count += (end - start) >> PAGE_SHIFT;
return 0;
}
#endif
/*
* Set up the page tables.
*/
void
paging_init (void)
{
unsigned long max_dma;
unsigned long zones_size[MAX_NR_ZONES];
#ifdef CONFIG_VIRTUAL_MEM_MAP
unsigned long zholes_size[MAX_NR_ZONES];
unsigned long max_gap;
#endif
/* initialize mem_map[] */
memset(zones_size, 0, sizeof(zones_size));
num_physpages = 0;
efi_memmap_walk(count_pages, &num_physpages);
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
#ifdef CONFIG_VIRTUAL_MEM_MAP
memset(zholes_size, 0, sizeof(zholes_size));
num_dma_physpages = 0;
efi_memmap_walk(count_dma_pages, &num_dma_physpages);
if (max_low_pfn < max_dma) {
zones_size[ZONE_DMA] = max_low_pfn;
zholes_size[ZONE_DMA] = max_low_pfn - num_dma_physpages;
} else {
zones_size[ZONE_DMA] = max_dma;
zholes_size[ZONE_DMA] = max_dma - num_dma_physpages;
if (num_physpages > num_dma_physpages) {
zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
zholes_size[ZONE_NORMAL] =
((max_low_pfn - max_dma) -
(num_physpages - num_dma_physpages));
}
}
max_gap = 0;
efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
if (max_gap < LARGE_GAP) {
vmem_map = (struct page *) 0;
free_area_init_node(0, &contig_page_data, NULL, zones_size, 0,
zholes_size);
mem_map = contig_page_data.node_mem_map;
} else {
unsigned long map_size;
/* allocate virtual_mem_map */
map_size = PAGE_ALIGN(max_low_pfn * sizeof(struct page));
vmalloc_end -= map_size;
vmem_map = (struct page *) vmalloc_end;
efi_memmap_walk(create_mem_map_page_table, 0);
free_area_init_node(0, &contig_page_data, vmem_map, zones_size,
0, zholes_size);
mem_map = contig_page_data.node_mem_map;
printk("Virtual mem_map starts at 0x%p\n", mem_map);
}
#else /* !CONFIG_VIRTUAL_MEM_MAP */
if (max_low_pfn < max_dma)
zones_size[ZONE_DMA] = max_low_pfn;
else {
zones_size[ZONE_DMA] = max_dma;
zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
}
free_area_init(zones_size);
#endif /* !CONFIG_VIRTUAL_MEM_MAP */
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}
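
The ZONE_DMA/ZONE_NORMAL split above is plain pfn arithmetic; the following
self-contained sketch (user-space C, made-up page counts) walks through the
hole accounting:

-- example zone-split.c (sketch, hypothetical numbers) --
#include <stdio.h>

int main(void)
{
	unsigned long max_low_pfn = 0x80000;	/* highest pfn covered by mem_map */
	unsigned long max_dma = 0x40000;	/* pfn of MAX_DMA_ADDRESS (4GB / 16KB pages) */
	unsigned long num_physpages = 0x70000;	/* pages actually present */
	unsigned long num_dma_physpages = 0x38000; /* present pages below MAX_DMA_ADDRESS */
	unsigned long zones[2] = { 0, 0 }, holes[2] = { 0, 0 };

	if (max_low_pfn < max_dma) {
		zones[0] = max_low_pfn;
		holes[0] = max_low_pfn - num_dma_physpages;
	} else {
		zones[0] = max_dma;
		holes[0] = max_dma - num_dma_physpages;
		if (num_physpages > num_dma_physpages) {
			zones[1] = max_low_pfn - max_dma;
			holes[1] = zones[1] - (num_physpages - num_dma_physpages);
		}
	}
	printf("DMA: 0x%lx pages (0x%lx holes), NORMAL: 0x%lx pages (0x%lx holes)\n",
	       zones[0], holes[0], zones[1], holes[1]);
	return 0;
}
-- end zone-split.c --
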
This diff is collapsed.
...@@ -24,9 +24,42 @@ static long htlbpagemem; ...@@ -24,9 +24,42 @@ static long htlbpagemem;
int htlbpage_max; int htlbpage_max;
static long htlbzone_pages; static long htlbzone_pages;
static LIST_HEAD(htlbpage_freelist); static struct list_head hugepage_freelists[MAX_NUMNODES];
static spinlock_t htlbpage_lock = SPIN_LOCK_UNLOCKED; static spinlock_t htlbpage_lock = SPIN_LOCK_UNLOCKED;
static void enqueue_huge_page(struct page *page)
{
list_add(&page->list,
&hugepage_freelists[page_zone(page)->zone_pgdat->node_id]);
}
static struct page *dequeue_huge_page(void)
{
int nid = numa_node_id();
struct page *page = NULL;
if (list_empty(&hugepage_freelists[nid])) {
for (nid = 0; nid < MAX_NUMNODES; ++nid)
if (!list_empty(&hugepage_freelists[nid]))
break;
}
if (nid >= 0 && nid < MAX_NUMNODES &&
!list_empty(&hugepage_freelists[nid])) {
page = list_entry(hugepage_freelists[nid].next, struct page, list);
list_del(&page->list);
}
return page;
}
static struct page *alloc_fresh_huge_page(void)
{
static int nid = 0;
struct page *page;
page = alloc_pages_node(nid, GFP_HIGHUSER, HUGETLB_PAGE_ORDER);
nid = (nid + 1) % numnodes;
return page;
}
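
The static counter in alloc_fresh_huge_page() spreads fresh huge pages
round-robin across nodes regardless of which node the caller runs on. A
trivial user-space sketch (hypothetical node count) of the resulting
sequence:

-- example round-robin.c (sketch) --
#include <stdio.h>

int main(void)
{
	static int nid = 0;	/* mirrors the static counter above */
	int numnodes = 4, i;	/* hypothetical node count */

	for (i = 0; i < 8; i++) {
		/* allocations land on nodes 0,1,2,3,0,1,2,3,... */
		printf("huge page %d allocated on node %d\n", i, nid);
		nid = (nid + 1) % numnodes;
	}
	return 0;
}
-- end round-robin.c --
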
void free_huge_page(struct page *page); void free_huge_page(struct page *page);
static struct page *alloc_hugetlb_page(void) static struct page *alloc_hugetlb_page(void)
...@@ -35,13 +68,11 @@ static struct page *alloc_hugetlb_page(void) ...@@ -35,13 +68,11 @@ static struct page *alloc_hugetlb_page(void)
struct page *page; struct page *page;
spin_lock(&htlbpage_lock); spin_lock(&htlbpage_lock);
if (list_empty(&htlbpage_freelist)) { page = dequeue_huge_page();
if (!page) {
spin_unlock(&htlbpage_lock); spin_unlock(&htlbpage_lock);
return NULL; return NULL;
} }
page = list_entry(htlbpage_freelist.next, struct page, list);
list_del(&page->list);
htlbpagemem--; htlbpagemem--;
spin_unlock(&htlbpage_lock); spin_unlock(&htlbpage_lock);
set_page_count(page, 1); set_page_count(page, 1);
...@@ -228,7 +259,7 @@ void free_huge_page(struct page *page) ...@@ -228,7 +259,7 @@ void free_huge_page(struct page *page)
INIT_LIST_HEAD(&page->list); INIT_LIST_HEAD(&page->list);
spin_lock(&htlbpage_lock); spin_lock(&htlbpage_lock);
list_add(&page->list, &htlbpage_freelist); enqueue_huge_page(page);
htlbpagemem++; htlbpagemem++;
spin_unlock(&htlbpage_lock); spin_unlock(&htlbpage_lock);
} }
...@@ -371,7 +402,7 @@ int try_to_free_low(int count) ...@@ -371,7 +402,7 @@ int try_to_free_low(int count)
map = NULL; map = NULL;
spin_lock(&htlbpage_lock); spin_lock(&htlbpage_lock);
list_for_each(p, &htlbpage_freelist) { list_for_each(p, &hugepage_freelists[0]) {
if (map) { if (map) {
list_del(&map->list); list_del(&map->list);
update_and_free_page(map); update_and_free_page(map);
...@@ -408,11 +439,11 @@ int set_hugetlb_mem_size(int count) ...@@ -408,11 +439,11 @@ int set_hugetlb_mem_size(int count)
return (int)htlbzone_pages; return (int)htlbzone_pages;
if (lcount > 0) { /* Increase the mem size. */ if (lcount > 0) { /* Increase the mem size. */
while (lcount--) { while (lcount--) {
page = alloc_pages(__GFP_HIGHMEM, HUGETLB_PAGE_ORDER); page = alloc_fresh_huge_page();
if (page == NULL) if (page == NULL)
break; break;
spin_lock(&htlbpage_lock); spin_lock(&htlbpage_lock);
list_add(&page->list, &htlbpage_freelist); enqueue_huge_page(page);
htlbpagemem++; htlbpagemem++;
htlbzone_pages++; htlbzone_pages++;
spin_unlock(&htlbpage_lock); spin_unlock(&htlbpage_lock);
...@@ -449,17 +480,18 @@ __setup("hugepages=", hugetlb_setup); ...@@ -449,17 +480,18 @@ __setup("hugepages=", hugetlb_setup);
static int __init hugetlb_init(void) static int __init hugetlb_init(void)
{ {
int i, j; int i;
struct page *page; struct page *page;
for (i = 0; i < MAX_NUMNODES; ++i)
INIT_LIST_HEAD(&hugepage_freelists[i]);
for (i = 0; i < htlbpage_max; ++i) { for (i = 0; i < htlbpage_max; ++i) {
page = alloc_pages(__GFP_HIGHMEM, HUGETLB_PAGE_ORDER); page = alloc_fresh_huge_page();
if (!page) if (!page)
break; break;
for (j = 0; j < HPAGE_SIZE/PAGE_SIZE; ++j)
SetPageReserved(&page[j]);
spin_lock(&htlbpage_lock); spin_lock(&htlbpage_lock);
list_add(&page->list, &htlbpage_freelist); enqueue_huge_page(page);
spin_unlock(&htlbpage_lock); spin_unlock(&htlbpage_lock);
} }
htlbpage_max = htlbpagemem = htlbzone_pages = i; htlbpage_max = htlbpagemem = htlbzone_pages = i;
......
...@@ -24,6 +24,7 @@ ...@@ -24,6 +24,7 @@
#include <asm/ia32.h> #include <asm/ia32.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/machvec.h> #include <asm/machvec.h>
#include <asm/numa.h>
#include <asm/patch.h> #include <asm/patch.h>
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/sal.h> #include <asm/sal.h>
...@@ -40,10 +41,8 @@ extern void ia64_tlb_init (void); ...@@ -40,10 +41,8 @@ extern void ia64_tlb_init (void);
unsigned long MAX_DMA_ADDRESS = PAGE_OFFSET + 0x100000000UL; unsigned long MAX_DMA_ADDRESS = PAGE_OFFSET + 0x100000000UL;
#ifdef CONFIG_VIRTUAL_MEM_MAP #ifdef CONFIG_VIRTUAL_MEM_MAP
# define LARGE_GAP 0x40000000 /* Use virtual mem map if hole is > than this */
unsigned long vmalloc_end = VMALLOC_END_INIT; unsigned long vmalloc_end = VMALLOC_END_INIT;
static struct page *vmem_map; struct page *vmem_map;
static unsigned long num_dma_physpages;
#endif #endif
static int pgt_cache_water[2] = { 25, 50 }; static int pgt_cache_water[2] = { 25, 50 };
...@@ -337,11 +336,12 @@ ia64_mmu_init (void *my_cpu_data) ...@@ -337,11 +336,12 @@ ia64_mmu_init (void *my_cpu_data)
#ifdef CONFIG_VIRTUAL_MEM_MAP #ifdef CONFIG_VIRTUAL_MEM_MAP
static int int
create_mem_map_page_table (u64 start, u64 end, void *arg) create_mem_map_page_table (u64 start, u64 end, void *arg)
{ {
unsigned long address, start_page, end_page; unsigned long address, start_page, end_page;
struct page *map_start, *map_end; struct page *map_start, *map_end;
int node;
pgd_t *pgd; pgd_t *pgd;
pmd_t *pmd; pmd_t *pmd;
pte_t *pte; pte_t *pte;
...@@ -351,19 +351,20 @@ create_mem_map_page_table (u64 start, u64 end, void *arg) ...@@ -351,19 +351,20 @@ create_mem_map_page_table (u64 start, u64 end, void *arg)
start_page = (unsigned long) map_start & PAGE_MASK; start_page = (unsigned long) map_start & PAGE_MASK;
end_page = PAGE_ALIGN((unsigned long) map_end); end_page = PAGE_ALIGN((unsigned long) map_end);
node = paddr_to_nid(__pa(start));
for (address = start_page; address < end_page; address += PAGE_SIZE) { for (address = start_page; address < end_page; address += PAGE_SIZE) {
pgd = pgd_offset_k(address); pgd = pgd_offset_k(address);
if (pgd_none(*pgd)) if (pgd_none(*pgd))
pgd_populate(&init_mm, pgd, alloc_bootmem_pages(PAGE_SIZE)); pgd_populate(&init_mm, pgd, alloc_bootmem_pages_node(NODE_DATA(node), PAGE_SIZE));
pmd = pmd_offset(pgd, address); pmd = pmd_offset(pgd, address);
if (pmd_none(*pmd)) if (pmd_none(*pmd))
pmd_populate_kernel(&init_mm, pmd, alloc_bootmem_pages(PAGE_SIZE)); pmd_populate_kernel(&init_mm, pmd, alloc_bootmem_pages_node(NODE_DATA(node), PAGE_SIZE));
pte = pte_offset_kernel(pmd, address); pte = pte_offset_kernel(pmd, address);
if (pte_none(*pte)) if (pte_none(*pte))
set_pte(pte, pfn_pte(__pa(alloc_bootmem_pages(PAGE_SIZE)) >> PAGE_SHIFT, set_pte(pte, pfn_pte(__pa(alloc_bootmem_pages_node(NODE_DATA(node), PAGE_SIZE)) >> PAGE_SHIFT,
PAGE_KERNEL)); PAGE_KERNEL));
} }
return 0; return 0;
...@@ -433,17 +434,7 @@ ia64_pfn_valid (unsigned long pfn) ...@@ -433,17 +434,7 @@ ia64_pfn_valid (unsigned long pfn)
return __get_user(byte, (char *) pfn_to_page(pfn)) == 0; return __get_user(byte, (char *) pfn_to_page(pfn)) == 0;
} }
static int int
count_dma_pages (u64 start, u64 end, void *arg)
{
unsigned long *count = arg;
if (end <= MAX_DMA_ADDRESS)
*count += (end - start) >> PAGE_SHIFT;
return 0;
}
static int
find_largest_hole (u64 start, u64 end, void *arg) find_largest_hole (u64 start, u64 end, void *arg)
{ {
u64 *max_gap = arg; u64 *max_gap = arg;
...@@ -459,103 +450,6 @@ find_largest_hole (u64 start, u64 end, void *arg) ...@@ -459,103 +450,6 @@ find_largest_hole (u64 start, u64 end, void *arg)
} }
#endif /* CONFIG_VIRTUAL_MEM_MAP */ #endif /* CONFIG_VIRTUAL_MEM_MAP */
static int
count_pages (u64 start, u64 end, void *arg)
{
unsigned long *count = arg;
*count += (end - start) >> PAGE_SHIFT;
return 0;
}
/*
* Set up the page tables.
*/
#ifdef CONFIG_DISCONTIGMEM
void
paging_init (void)
{
extern void discontig_paging_init(void);
discontig_paging_init();
efi_memmap_walk(count_pages, &num_physpages);
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}
#else /* !CONFIG_DISCONTIGMEM */
void
paging_init (void)
{
unsigned long max_dma;
unsigned long zones_size[MAX_NR_ZONES];
# ifdef CONFIG_VIRTUAL_MEM_MAP
unsigned long zholes_size[MAX_NR_ZONES];
unsigned long max_gap;
# endif
/* initialize mem_map[] */
memset(zones_size, 0, sizeof(zones_size));
num_physpages = 0;
efi_memmap_walk(count_pages, &num_physpages);
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
# ifdef CONFIG_VIRTUAL_MEM_MAP
memset(zholes_size, 0, sizeof(zholes_size));
num_dma_physpages = 0;
efi_memmap_walk(count_dma_pages, &num_dma_physpages);
if (max_low_pfn < max_dma) {
zones_size[ZONE_DMA] = max_low_pfn;
zholes_size[ZONE_DMA] = max_low_pfn - num_dma_physpages;
} else {
zones_size[ZONE_DMA] = max_dma;
zholes_size[ZONE_DMA] = max_dma - num_dma_physpages;
if (num_physpages > num_dma_physpages) {
zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
zholes_size[ZONE_NORMAL] = ((max_low_pfn - max_dma)
- (num_physpages - num_dma_physpages));
}
}
max_gap = 0;
efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
if (max_gap < LARGE_GAP) {
vmem_map = (struct page *) 0;
free_area_init_node(0, &contig_page_data, NULL, zones_size, 0, zholes_size);
mem_map = contig_page_data.node_mem_map;
}
else {
unsigned long map_size;
/* allocate virtual_mem_map */
map_size = PAGE_ALIGN(max_low_pfn * sizeof(struct page));
vmalloc_end -= map_size;
vmem_map = (struct page *) vmalloc_end;
efi_memmap_walk(create_mem_map_page_table, 0);
free_area_init_node(0, &contig_page_data, vmem_map, zones_size, 0, zholes_size);
mem_map = contig_page_data.node_mem_map;
printk("Virtual mem_map starts at 0x%p\n", mem_map);
}
# else /* !CONFIG_VIRTUAL_MEM_MAP */
if (max_low_pfn < max_dma)
zones_size[ZONE_DMA] = max_low_pfn;
else {
zones_size[ZONE_DMA] = max_dma;
zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
}
free_area_init(zones_size);
# endif /* !CONFIG_VIRTUAL_MEM_MAP */
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}
#endif /* !CONFIG_DISCONTIGMEM */
static int static int
count_reserved_pages (u64 start, u64 end, void *arg) count_reserved_pages (u64 start, u64 end, void *arg)
{ {
......
...@@ -867,6 +867,9 @@ sn_pci_init (void) ...@@ -867,6 +867,9 @@ sn_pci_init (void)
int i = 0; int i = 0;
struct pci_controller *controller; struct pci_controller *controller;
if (!ia64_platform_is("sn2"))
return 0;
/* /*
* set pci_raw_ops, etc. * set pci_raw_ops, etc.
*/ */
......
...@@ -285,7 +285,6 @@ static cpuid_t intr_cpu_choose_node(void) ...@@ -285,7 +285,6 @@ static cpuid_t intr_cpu_choose_node(void)
cpuid_t intr_heuristic(vertex_hdl_t dev, int req_bit, int *resp_bit) cpuid_t intr_heuristic(vertex_hdl_t dev, int req_bit, int *resp_bit)
{ {
cpuid_t cpuid; cpuid_t cpuid;
cpuid_t candidate = CPU_NONE;
vertex_hdl_t pconn_vhdl; vertex_hdl_t pconn_vhdl;
pcibr_soft_t pcibr_soft; pcibr_soft_t pcibr_soft;
int bit; int bit;
...@@ -293,30 +292,32 @@ cpuid_t intr_heuristic(vertex_hdl_t dev, int req_bit, int *resp_bit) ...@@ -293,30 +292,32 @@ cpuid_t intr_heuristic(vertex_hdl_t dev, int req_bit, int *resp_bit)
/* XXX: gross layering violation.. */ /* XXX: gross layering violation.. */
if (hwgraph_edge_get(dev, EDGE_LBL_PCI, &pconn_vhdl) == GRAPH_SUCCESS) { if (hwgraph_edge_get(dev, EDGE_LBL_PCI, &pconn_vhdl) == GRAPH_SUCCESS) {
pcibr_soft = pcibr_soft_get(pconn_vhdl); pcibr_soft = pcibr_soft_get(pconn_vhdl);
if (pcibr_soft && pcibr_soft->bsi_err_intr) if (pcibr_soft && pcibr_soft->bsi_err_intr) {
candidate = ((hub_intr_t)pcibr_soft->bsi_err_intr)->i_cpuid;
}
if (candidate != CPU_NONE) {
/* /*
* The cpu was chosen already when we assigned * The cpu was chosen already when we assigned
* the error interrupt. * the error interrupt.
*/ */
bit = intr_reserve_level(candidate, req_bit); cpuid = ((hub_intr_t)pcibr_soft->bsi_err_intr)->i_cpuid;
if (bit >= 0) { goto done;
*resp_bit = bit;
return candidate;
} }
printk("Cannot target interrupt to target node (%ld).\n",candidate);
return CPU_NONE;
} }
/* /*
* Need to choose one. Try the controlling c-brick first. * Need to choose one. Try the controlling c-brick first.
*/ */
cpuid = intr_cpu_choose_from_node(master_node_get(dev)); cpuid = intr_cpu_choose_from_node(master_node_get(dev));
if (cpuid != CPU_NONE) if (cpuid == CPU_NONE)
cpuid = intr_cpu_choose_node();
done:
if (cpuid != CPU_NONE) {
bit = intr_reserve_level(cpuid, req_bit);
if (bit >= 0) {
*resp_bit = bit;
return cpuid; return cpuid;
return intr_cpu_choose_node(); }
}
printk("Cannot target interrupt to target cpu (%ld).\n", cpuid);
return CPU_NONE;
} }
...@@ -147,7 +147,6 @@ char drive_info[4*16]; ...@@ -147,7 +147,6 @@ char drive_info[4*16];
* Sets up an initial console to aid debugging. Intended primarily * Sets up an initial console to aid debugging. Intended primarily
* for bringup. See start_kernel() in init/main.c. * for bringup. See start_kernel() in init/main.c.
*/ */
#if defined(CONFIG_IA64_EARLY_PRINTK_SGI_SN) || defined(CONFIG_IA64_SGI_SN_SIM)
void __init void __init
early_sn_setup(void) early_sn_setup(void)
...@@ -189,7 +188,6 @@ early_sn_setup(void) ...@@ -189,7 +188,6 @@ early_sn_setup(void)
printk(KERN_DEBUG "early_sn_setup: setting master_node_bedrock_address to 0x%lx\n", master_node_bedrock_address); printk(KERN_DEBUG "early_sn_setup: setting master_node_bedrock_address to 0x%lx\n", master_node_bedrock_address);
} }
} }
#endif /* CONFIG_IA64_EARLY_PRINTK_SGI_SN */
#ifdef CONFIG_IA64_MCA #ifdef CONFIG_IA64_MCA
extern int platform_intr_list[]; extern int platform_intr_list[];
......
...@@ -11,6 +11,8 @@ ...@@ -11,6 +11,8 @@
#include <asm/sn/sn2/io.h> #include <asm/sn/sn2/io.h>
#ifdef CONFIG_IA64_GENERIC
#undef __sn_inb #undef __sn_inb
#undef __sn_inw #undef __sn_inw
#undef __sn_inl #undef __sn_inl
...@@ -81,3 +83,5 @@ __sn_readq (void *addr) ...@@ -81,3 +83,5 @@ __sn_readq (void *addr)
{ {
return ___sn_readq (addr); return ___sn_readq (addr);
} }
#endif
...@@ -125,6 +125,7 @@ csum_partial (const unsigned char *buff, int len, unsigned int sum) ...@@ -125,6 +125,7 @@ csum_partial (const unsigned char *buff, int len, unsigned int sum)
return(sum); return(sum);
} }
EXPORT_SYMBOL(csum_partial);
/* /*
......
...@@ -10,6 +10,7 @@ ...@@ -10,6 +10,7 @@
*/ */
#include <linux/types.h> #include <linux/types.h>
#include <linux/compiler.h>
#include <linux/init.h> #include <linux/init.h>
int __init sbus_init(void) int __init sbus_init(void)
......
...@@ -23,11 +23,10 @@ map(unsigned int phys, unsigned int virt, unsigned int size) ...@@ -23,11 +23,10 @@ map(unsigned int phys, unsigned int virt, unsigned int size)
char *method; char *method;
ihandle mmu_ihandle; ihandle mmu_ihandle;
int misc; int misc;
unsigned int phys;
unsigned int virt;
unsigned int size; unsigned int size;
unsigned int virt;
unsigned int phys;
int ret0; int ret0;
int ret1;
} args; } args;
if (of_prom_mmu == 0) { if (of_prom_mmu == 0) {
...@@ -36,10 +35,10 @@ map(unsigned int phys, unsigned int virt, unsigned int size) ...@@ -36,10 +35,10 @@ map(unsigned int phys, unsigned int virt, unsigned int size)
} }
args.service = "call-method"; args.service = "call-method";
args.nargs = 6; args.nargs = 6;
args.nret = 2; args.nret = 1;
args.method = "map"; args.method = "map";
args.mmu_ihandle = of_prom_mmu; args.mmu_ihandle = of_prom_mmu;
args.misc = -1; args.misc = 0;
args.phys = phys; args.phys = phys;
args.virt = virt; args.virt = virt;
args.size = size; args.size = size;
......
...@@ -38,9 +38,9 @@ static char heap[SCRATCH_SIZE]; ...@@ -38,9 +38,9 @@ static char heap[SCRATCH_SIZE];
static unsigned long ram_start = 0; static unsigned long ram_start = 0;
static unsigned long ram_end = 0x1000000; static unsigned long ram_end = 0x1000000;
static unsigned long prog_start = 0x800000;
static unsigned long prog_size = 0x800000;
static unsigned long prog_start = 0x900000;
static unsigned long prog_size = 0x700000;
typedef void (*kernel_start_t)(int, int, void *); typedef void (*kernel_start_t)(int, int, void *);
......
...@@ -181,26 +181,8 @@ __after_mmu_off: ...@@ -181,26 +181,8 @@ __after_mmu_off:
bl setup_disp_bat bl setup_disp_bat
#endif #endif
#else /* CONFIG_POWER4 */ #else /* CONFIG_POWER4 */
/*
* Load up the SDR1 and segment register values now
* since we don't have the BATs.
* Also make sure we are running in 32-bit mode.
*/
bl reloc_offset bl reloc_offset
addis r14,r3,_SDR1@ha /* get the value from _SDR1 */ bl initial_mm_power4
lwz r14,_SDR1@l(r14) /* assume hash table below 4GB */
mtspr SDR1,r14
slbia
lis r4,0x2000 /* set pseudo-segment reg 12 */
ori r5,r4,0x0ccc
mtsr 12,r5
ori r4,r4,0x0888 /* set pseudo-segment reg 8 */
mtsr 8,r4 /* (for access to serial port) */
mfmsr r0
clrldi r0,r0,1
sync
mtmsr r0
isync
#endif /* CONFIG_POWER4 */ #endif /* CONFIG_POWER4 */
/* /*
...@@ -1637,6 +1619,34 @@ setup_disp_bat: ...@@ -1637,6 +1619,34 @@ setup_disp_bat:
#endif /* !defined(CONFIG_APUS) && defined(CONFIG_BOOTX_TEXT) */ #endif /* !defined(CONFIG_APUS) && defined(CONFIG_BOOTX_TEXT) */
#else /* CONFIG_POWER4 */ #else /* CONFIG_POWER4 */
/*
* Load up the SDR1 and segment register values now
* since we don't have the BATs.
* Also make sure we are running in 32-bit mode.
*/
initial_mm_power4:
addis r14,r3,_SDR1@ha /* get the value from _SDR1 */
lwz r14,_SDR1@l(r14) /* assume hash table below 4GB */
mtspr SDR1,r14
slbia
lis r4,0x2000 /* set pseudo-segment reg 12 */
ori r5,r4,0x0ccc
mtsr 12,r5
ori r5,r4,0x0888 /* set pseudo-segment reg 8 */
mtsr 8,r5 /* (for access to serial port) */
ori r5,r4,0x0999 /* set pseudo-segment reg 9 */
mtsr 9,r5 /* (for access to screen) */
mfmsr r0
clrldi r0,r0,1
sync
mtmsr r0
isync
blr
/*
* On 970 (G5), we pre-set a few bits in HID0 & HID1
*/
ppc970_setup_hid: ppc970_setup_hid:
li r0,0 li r0,0
sync sync
......
...@@ -367,11 +367,17 @@ EXPORT_SYMBOL(next_mmu_context); ...@@ -367,11 +367,17 @@ EXPORT_SYMBOL(next_mmu_context);
EXPORT_SYMBOL(set_context); EXPORT_SYMBOL(set_context);
EXPORT_SYMBOL(handle_mm_fault); /* For MOL */ EXPORT_SYMBOL(handle_mm_fault); /* For MOL */
EXPORT_SYMBOL_NOVERS(disarm_decr); EXPORT_SYMBOL_NOVERS(disarm_decr);
extern long mol_trampoline;
EXPORT_SYMBOL(mol_trampoline); /* For MOL */
#ifdef CONFIG_PPC_STD_MMU #ifdef CONFIG_PPC_STD_MMU
EXPORT_SYMBOL(flush_hash_pages); /* For MOL */ EXPORT_SYMBOL(flush_hash_pages); /* For MOL */
#ifdef CONFIG_SMP
extern int mmu_hash_lock;
EXPORT_SYMBOL(mmu_hash_lock); /* For MOL */
#endif /* CONFIG_SMP */
extern long *intercept_table; extern long *intercept_table;
EXPORT_SYMBOL(intercept_table); EXPORT_SYMBOL(intercept_table);
#endif #endif /* CONFIG_PPC_STD_MMU */
EXPORT_SYMBOL(cur_cpu_spec); EXPORT_SYMBOL(cur_cpu_spec);
#ifdef CONFIG_PPC_PMAC #ifdef CONFIG_PPC_PMAC
extern unsigned long agp_special_page; extern unsigned long agp_special_page;
......
...@@ -56,6 +56,7 @@ static struct files_struct init_files = INIT_FILES; ...@@ -56,6 +56,7 @@ static struct files_struct init_files = INIT_FILES;
static struct signal_struct init_signals = INIT_SIGNALS(init_signals); static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand); static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
struct mm_struct init_mm = INIT_MM(init_mm); struct mm_struct init_mm = INIT_MM(init_mm);
EXPORT_SYMBOL(init_mm);
/* this is 8kB-aligned so we can get to the thread_info struct /* this is 8kB-aligned so we can get to the thread_info struct
at the base of it from the stack pointer with 1 integer instruction. */ at the base of it from the stack pointer with 1 integer instruction. */
......
...@@ -417,6 +417,21 @@ _GLOBAL(hash_page_patch_C) ...@@ -417,6 +417,21 @@ _GLOBAL(hash_page_patch_C)
lwz r6,next_slot@l(r4) lwz r6,next_slot@l(r4)
addi r6,r6,PTE_SIZE addi r6,r6,PTE_SIZE
andi. r6,r6,7*PTE_SIZE andi. r6,r6,7*PTE_SIZE
#ifdef CONFIG_POWER4
/*
* Since we don't have BATs on POWER4, we rely on always having
* PTEs in the hash table to map the hash table and the code
* that manipulates it in virtual mode, namely flush_hash_page and
* flush_hash_segments. Otherwise we can get a DSI inside those
* routines which leads to a deadlock on the hash_table_lock on
* SMP machines. We avoid this by never overwriting the first
* PTE of each PTEG if it is already valid.
* -- paulus.
*/
bne 102f
li r6,PTE_SIZE
102:
#endif /* CONFIG_POWER4 */
stw r6,next_slot@l(r4) stw r6,next_slot@l(r4)
add r4,r3,r6 add r4,r3,r6
......
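
In C terms, the slot rotation the hunk above patches looks roughly like
this (a sketch, not kernel code; PTE_SIZE is the 8-byte hashed PTE, eight
PTEs per PTEG):

-- example slot-rotation (sketch) --
#define PTE_SIZE 8	/* one hashed PTE: two 32-bit words */

/* Advance the round-robin slot within the 8-slot PTEG; on POWER4,
 * skip slot 0 so a valid PTE there (possibly mapping the hash table
 * itself) is never overwritten. */
unsigned int advance_slot(unsigned int next_slot)
{
	unsigned int next = (next_slot + PTE_SIZE) & (7 * PTE_SIZE);
#ifdef CONFIG_POWER4
	if (next == 0)			/* wrapped around onto slot 0 */
		next = PTE_SIZE;	/* use slot 1 instead */
#endif
	return next;
}
-- end example --
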
...@@ -211,6 +211,17 @@ void __init MMU_init_hw(void) ...@@ -211,6 +211,17 @@ void __init MMU_init_hw(void)
#define MIN_N_HPTEG 1024 /* min 64kB hash table */ #define MIN_N_HPTEG 1024 /* min 64kB hash table */
#endif #endif
#ifdef CONFIG_POWER4
/* The hash table has already been allocated and initialized
in prom.c */
n_hpteg = Hash_size >> LG_HPTEG_SIZE;
lg_n_hpteg = __ilog2(n_hpteg);
/* Remove the hash table from the available memory */
if (Hash)
reserve_phys_mem(__pa(Hash), Hash_size);
#else /* CONFIG_POWER4 */
/* /*
* Allow 1 HPTE (1/8 HPTEG) for each page of memory. * Allow 1 HPTE (1/8 HPTEG) for each page of memory.
* This is less than the recommended amount, but then * This is less than the recommended amount, but then
...@@ -224,13 +235,7 @@ void __init MMU_init_hw(void) ...@@ -224,13 +235,7 @@ void __init MMU_init_hw(void)
++lg_n_hpteg; /* round up if not power of 2 */ ++lg_n_hpteg; /* round up if not power of 2 */
n_hpteg = 1 << lg_n_hpteg; n_hpteg = 1 << lg_n_hpteg;
} }
Hash_size = n_hpteg << LG_HPTEG_SIZE; Hash_size = n_hpteg << LG_HPTEG_SIZE;
Hash_mask = n_hpteg - 1;
hmask = Hash_mask >> (16 - LG_HPTEG_SIZE);
mb2 = mb = 32 - LG_HPTEG_SIZE - lg_n_hpteg;
if (lg_n_hpteg > 16)
mb2 = 16 - LG_HPTEG_SIZE;
/* /*
* Find some memory for the hash table. * Find some memory for the hash table.
...@@ -240,6 +245,7 @@ void __init MMU_init_hw(void) ...@@ -240,6 +245,7 @@ void __init MMU_init_hw(void)
cacheable_memzero(Hash, Hash_size); cacheable_memzero(Hash, Hash_size);
_SDR1 = __pa(Hash) | SDR1_LOW_BITS; _SDR1 = __pa(Hash) | SDR1_LOW_BITS;
Hash_end = (PTE *) ((unsigned long)Hash + Hash_size); Hash_end = (PTE *) ((unsigned long)Hash + Hash_size);
#endif /* CONFIG_POWER4 */
printk("Total memory = %ldMB; using %ldkB for hash table (at %p)\n", printk("Total memory = %ldMB; using %ldkB for hash table (at %p)\n",
total_memory >> 20, Hash_size >> 10, Hash); total_memory >> 20, Hash_size >> 10, Hash);
...@@ -249,6 +255,12 @@ void __init MMU_init_hw(void) ...@@ -249,6 +255,12 @@ void __init MMU_init_hw(void)
* Patch up the instructions in hashtable.S:create_hpte * Patch up the instructions in hashtable.S:create_hpte
*/ */
if ( ppc_md.progress ) ppc_md.progress("hash:patch", 0x345); if ( ppc_md.progress ) ppc_md.progress("hash:patch", 0x345);
Hash_mask = n_hpteg - 1;
hmask = Hash_mask >> (16 - LG_HPTEG_SIZE);
mb2 = mb = 32 - LG_HPTEG_SIZE - lg_n_hpteg;
if (lg_n_hpteg > 16)
mb2 = 16 - LG_HPTEG_SIZE;
hash_page_patch_A[0] = (hash_page_patch_A[0] & ~0xffff) hash_page_patch_A[0] = (hash_page_patch_A[0] & ~0xffff)
| ((unsigned int)(Hash) >> 16); | ((unsigned int)(Hash) >> 16);
hash_page_patch_A[1] = (hash_page_patch_A[1] & ~0x7c0) | (mb << 6); hash_page_patch_A[1] = (hash_page_patch_A[1] & ~0x7c0) | (mb << 6);
......
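
For concreteness, a worked example of the values being patched into
hashtable.S above (the numbers are illustrative; they assume the usual
64-byte HPTEG, i.e. LG_HPTEG_SIZE == 6):

-- example hash geometry (worked numbers) --
	Hash_size  = 1 << 20;			/* a 1MB hash table */
	n_hpteg    = Hash_size >> 6;		/* 16384 HPTEGs */
	lg_n_hpteg = 14;			/* __ilog2(16384) */
	Hash_mask  = n_hpteg - 1;		/* 0x3fff */
	hmask      = Hash_mask >> (16 - 6);	/* 0xf */
	mb = mb2   = 32 - 6 - 14;		/* 12; mb2 == mb since lg_n_hpteg <= 16 */
-- end example --
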
This diff is collapsed.
...@@ -166,7 +166,7 @@ via_calibrate_decr(void) ...@@ -166,7 +166,7 @@ via_calibrate_decr(void)
{ {
struct device_node *vias; struct device_node *vias;
volatile unsigned char *via; volatile unsigned char *via;
int count = VIA_TIMER_FREQ_6 / HZ; int count = VIA_TIMER_FREQ_6 / 100;
unsigned int dstart, dend; unsigned int dstart, dend;
vias = find_devices("via-cuda"); vias = find_devices("via-cuda");
...@@ -196,7 +196,7 @@ via_calibrate_decr(void) ...@@ -196,7 +196,7 @@ via_calibrate_decr(void)
; ;
dend = get_dec(); dend = get_dec();
tb_ticks_per_jiffy = (dstart - dend) / 6; tb_ticks_per_jiffy = (dstart - dend) / (6 * (HZ/100));
tb_to_us = mulhwu_scale_factor(dstart - dend, 60000); tb_to_us = mulhwu_scale_factor(dstart - dend, 60000);
printk(KERN_INFO "via_calibrate_decr: ticks per jiffy = %u (%u ticks)\n", printk(KERN_INFO "via_calibrate_decr: ticks per jiffy = %u (%u ticks)\n",
...@@ -260,7 +260,9 @@ pmac_calibrate_decr(void) ...@@ -260,7 +260,9 @@ pmac_calibrate_decr(void)
* calibration. That's better since the VIA itself seems * calibration. That's better since the VIA itself seems
* to be slightly off. --BenH * to be slightly off. --BenH
*/ */
if (!machine_is_compatible("MacRISC2")) if (!machine_is_compatible("MacRISC2") &&
!machine_is_compatible("MacRISC3") &&
!machine_is_compatible("MacRISC4"))
if (via_calibrate_decr()) if (via_calibrate_decr())
return; return;
......
...@@ -97,9 +97,9 @@ static int of_device_probe(struct device *dev) ...@@ -97,9 +97,9 @@ static int of_device_probe(struct device *dev)
static int of_device_remove(struct device *dev) static int of_device_remove(struct device *dev)
{ {
struct of_device * of_dev = to_of_device(dev); struct of_device * of_dev = to_of_device(dev);
struct of_platform_driver * drv = to_of_platform_driver(of_dev->dev.driver); struct of_platform_driver * drv = to_of_platform_driver(dev->driver);
if (drv && drv->remove) if (dev->driver && drv->remove)
drv->remove(of_dev); drv->remove(of_dev);
return 0; return 0;
} }
...@@ -107,10 +107,10 @@ static int of_device_remove(struct device *dev) ...@@ -107,10 +107,10 @@ static int of_device_remove(struct device *dev)
static int of_device_suspend(struct device *dev, u32 state) static int of_device_suspend(struct device *dev, u32 state)
{ {
struct of_device * of_dev = to_of_device(dev); struct of_device * of_dev = to_of_device(dev);
struct of_platform_driver * drv = to_of_platform_driver(of_dev->dev.driver); struct of_platform_driver * drv = to_of_platform_driver(dev->driver);
int error = 0; int error = 0;
if (drv && drv->suspend) if (dev->driver && drv->suspend)
error = drv->suspend(of_dev, state); error = drv->suspend(of_dev, state);
return error; return error;
} }
...@@ -118,10 +118,10 @@ static int of_device_suspend(struct device *dev, u32 state) ...@@ -118,10 +118,10 @@ static int of_device_suspend(struct device *dev, u32 state)
static int of_device_resume(struct device * dev) static int of_device_resume(struct device * dev)
{ {
struct of_device * of_dev = to_of_device(dev); struct of_device * of_dev = to_of_device(dev);
struct of_platform_driver * drv = to_of_platform_driver(of_dev->dev.driver); struct of_platform_driver * drv = to_of_platform_driver(dev->driver);
int error = 0; int error = 0;
if (drv && drv->resume) if (dev->driver && drv->resume)
error = drv->resume(of_dev); error = drv->resume(of_dev);
return error; return error;
} }
......
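
The reason the guard changed from drv to dev->driver deserves a note:
to_of_platform_driver() is a container_of()-style cast, so applying it to a
NULL dev->driver yields a small non-NULL pointer (unless the embedded
member happens to sit at offset zero), and the old "if (drv)" test could
never fail. A user-space sketch with hypothetical stand-in types:

-- example null-guard.c (sketch) --
#include <stddef.h>
#include <stdio.h>

struct drv_core { int flags; };		/* stand-in for struct device_driver */
struct of_drv {
	const char *name;		/* anything before the member */
	struct drv_core core;		/* the embedded driver core */
};

#define to_of_drv(p) \
	((struct of_drv *)((char *)(p) - offsetof(struct of_drv, core)))

int main(void)
{
	struct drv_core *bound = NULL;	/* dev->driver with no driver bound */
	struct of_drv *drv = to_of_drv(bound);

	/* drv is non-NULL garbage, so "if (drv)" is useless;
	 * "if (bound)" is the real NULL test */
	printf("bound=%p drv=%p\n", (void *)bound, (void *)drv);
	return 0;
}
-- end null-guard.c --
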
...@@ -937,7 +937,6 @@ _GLOBAL(do_stab_bolted) ...@@ -937,7 +937,6 @@ _GLOBAL(do_stab_bolted)
mfspr r20,SPRG2 mfspr r20,SPRG2
mfspr r21,SPRG1 mfspr r21,SPRG1
rfid rfid
_TRACEBACK(do_stab_bolted)
/* /*
* r20 points to the PACA, r21 to the exception frame, * r20 points to the PACA, r21 to the exception frame,
...@@ -1052,7 +1051,6 @@ SLB_NUM_ENTRIES = 64 ...@@ -1052,7 +1051,6 @@ SLB_NUM_ENTRIES = 64
mfspr r20,SPRG2 mfspr r20,SPRG2
mfspr r21,SPRG1 mfspr r21,SPRG1
rfid rfid
_TRACEBACK(do_slb_bolted)
_GLOBAL(do_stab_SI) _GLOBAL(do_stab_SI)
mflr r21 /* Save LR in r21 */ mflr r21 /* Save LR in r21 */
......
...@@ -84,8 +84,6 @@ int cpu_idle(void) ...@@ -84,8 +84,6 @@ int cpu_idle(void)
lpaca = get_paca(); lpaca = get_paca();
while (1) { while (1) {
irq_stat[smp_processor_id()].idle_timestamp = jiffies;
if (lpaca->xLpPaca.xSharedProc) { if (lpaca->xLpPaca.xSharedProc) {
if (ItLpQueue_isLpIntPending(lpaca->lpQueuePtr)) if (ItLpQueue_isLpIntPending(lpaca->lpQueuePtr))
process_iSeries_events(); process_iSeries_events();
...@@ -125,7 +123,6 @@ int cpu_idle(void) ...@@ -125,7 +123,6 @@ int cpu_idle(void)
long oldval; long oldval;
while (1) { while (1) {
irq_stat[smp_processor_id()].idle_timestamp = jiffies;
oldval = test_and_clear_thread_flag(TIF_NEED_RESCHED); oldval = test_and_clear_thread_flag(TIF_NEED_RESCHED);
if (!oldval) { if (!oldval) {
......
This diff is collapsed.
...@@ -359,7 +359,7 @@ _GLOBAL(_outsl) ...@@ -359,7 +359,7 @@ _GLOBAL(_outsl)
bdnz 00b bdnz 00b
blr blr
_GLOBAL(ide_insw) /* _GLOBAL(ide_insw) now in drivers/ide/ide-iops.c */
_GLOBAL(_insw_ns) _GLOBAL(_insw_ns)
cmpwi 0,r5,0 cmpwi 0,r5,0
mtctr r5 mtctr r5
...@@ -371,7 +371,7 @@ _GLOBAL(_insw_ns) ...@@ -371,7 +371,7 @@ _GLOBAL(_insw_ns)
bdnz 00b bdnz 00b
blr blr
_GLOBAL(ide_outsw) /* _GLOBAL(ide_outsw) now in drivers/ide/ide-iops.c */
_GLOBAL(_outsw_ns) _GLOBAL(_outsw_ns)
cmpwi 0,r5,0 cmpwi 0,r5,0
mtctr r5 mtctr r5
...@@ -742,7 +742,7 @@ _GLOBAL(sys_call_table32) ...@@ -742,7 +742,7 @@ _GLOBAL(sys_call_table32)
.llong .sys32_getdents .llong .sys32_getdents
.llong .ppc32_select .llong .ppc32_select
.llong .sys_flock .llong .sys_flock
.llong .sys32_msync .llong .sys_msync
.llong .sys32_readv /* 145 */ .llong .sys32_readv /* 145 */
.llong .sys32_writev .llong .sys32_writev
.llong .sys32_getsid .llong .sys32_getsid
...@@ -750,7 +750,7 @@ _GLOBAL(sys_call_table32) ...@@ -750,7 +750,7 @@ _GLOBAL(sys_call_table32)
.llong .sys32_sysctl .llong .sys32_sysctl
.llong .sys_mlock /* 150 */ .llong .sys_mlock /* 150 */
.llong .sys_munlock .llong .sys_munlock
.llong .sys32_mlockall .llong .sys_mlockall
.llong .sys_munlockall .llong .sys_munlockall
.llong .sys32_sched_setparam .llong .sys32_sched_setparam
.llong .sys32_sched_getparam /* 155 */ .llong .sys32_sched_getparam /* 155 */
......
This diff is collapsed.