Commit 80c55208 authored by Thomas Gleixner

Merge branch 'cpus4096' into irq/threaded

Conflicts:
	arch/parisc/kernel/irq.c
	kernel/irq/handle.c
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
parents b3e3b302 8c083f08
...@@ -18,11 +18,11 @@ For an architecture to support this feature, it must define some of ...@@ -18,11 +18,11 @@ For an architecture to support this feature, it must define some of
these macros in include/asm-XXX/topology.h: these macros in include/asm-XXX/topology.h:
#define topology_physical_package_id(cpu) #define topology_physical_package_id(cpu)
#define topology_core_id(cpu) #define topology_core_id(cpu)
#define topology_thread_siblings(cpu) #define topology_thread_cpumask(cpu)
#define topology_core_siblings(cpu) #define topology_core_cpumask(cpu)
The type of **_id is int. The type of **_id is int.
The type of siblings is cpumask_t. The type of siblings is (const) struct cpumask *.
To be consistent on all architectures, include/linux/topology.h To be consistent on all architectures, include/linux/topology.h
provides default definitions for any of the above macros that are provides default definitions for any of the above macros that are
......
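The renamed macros above now hand back pointers rather than cpumask_t values. A minimal sketch of what an architecture's topology.h might provide under the new names (the per-CPU map names here are assumptions for illustration, not taken from this patch):

	/* hypothetical asm-XXX/topology.h fragment, mirroring the documented types */
	#define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
	#define topology_core_id(cpu)			(cpu_data(cpu).cpu_core_id)
	/* siblings are now exposed as (const) struct cpumask * */
	#define topology_thread_cpumask(cpu)		(&per_cpu(cpu_sibling_map, cpu))
	#define topology_core_cpumask(cpu)		(&per_cpu(cpu_core_map, cpu))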
...@@ -1310,8 +1310,13 @@ and is between 256 and 4096 characters. It is defined in the file ...@@ -1310,8 +1310,13 @@ and is between 256 and 4096 characters. It is defined in the file
memtest= [KNL,X86] Enable memtest memtest= [KNL,X86] Enable memtest
Format: <integer> Format: <integer>
range: 0,4 : pattern number
default : 0 <disable> default : 0 <disable>
Specifies the number of memtest passes to be
performed. Each pass selects another test
pattern from a given set of patterns. Memtest
fills the memory with this pattern, validates
memory contents and reserves bad memory
regions that are detected.
meye.*= [HW] Set MotionEye Camera parameters meye.*= [HW] Set MotionEye Camera parameters
See Documentation/video4linux/meye.txt. See Documentation/video4linux/meye.txt.
......
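As a rough illustration of what one such pass does (a minimal sketch assuming a hypothetical reserve_bad_mem() helper; the in-kernel implementation differs in detail):

	/* sketch of a single memtest pass: fill, validate, reserve bad words */
	static void memtest_one_pass(unsigned long *start, unsigned long *end,
				     unsigned long pattern)
	{
		unsigned long *p;

		/* fill the region with the selected test pattern */
		for (p = start; p < end; p++)
			*p = pattern;

		/* validate the contents and reserve anything that reads back wrong */
		for (p = start; p < end; p++)
			if (*p != pattern)
				reserve_bad_mem((unsigned long)p, sizeof(*p));	/* assumed helper */
	}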
...@@ -158,7 +158,7 @@ Offset Proto Name Meaning ...@@ -158,7 +158,7 @@ Offset Proto Name Meaning
0202/4 2.00+ header Magic signature "HdrS" 0202/4 2.00+ header Magic signature "HdrS"
0206/2 2.00+ version Boot protocol version supported 0206/2 2.00+ version Boot protocol version supported
0208/4 2.00+ realmode_swtch Boot loader hook (see below) 0208/4 2.00+ realmode_swtch Boot loader hook (see below)
020C/2 2.00+ start_sys The load-low segment (0x1000) (obsolete) 020C/2 2.00+ start_sys_seg The load-low segment (0x1000) (obsolete)
020E/2 2.00+ kernel_version Pointer to kernel version string 020E/2 2.00+ kernel_version Pointer to kernel version string
0210/1 2.00+ type_of_loader Boot loader identifier 0210/1 2.00+ type_of_loader Boot loader identifier
0211/1 2.00+ loadflags Boot protocol option flags 0211/1 2.00+ loadflags Boot protocol option flags
...@@ -170,10 +170,11 @@ Offset Proto Name Meaning ...@@ -170,10 +170,11 @@ Offset Proto Name Meaning
0224/2 2.01+ heap_end_ptr Free memory after setup end 0224/2 2.01+ heap_end_ptr Free memory after setup end
0226/2 N/A pad1 Unused 0226/2 N/A pad1 Unused
0228/4 2.02+ cmd_line_ptr 32-bit pointer to the kernel command line 0228/4 2.02+ cmd_line_ptr 32-bit pointer to the kernel command line
022C/4 2.03+ initrd_addr_max Highest legal initrd address 022C/4 2.03+ ramdisk_max Highest legal initrd address
0230/4 2.05+ kernel_alignment Physical addr alignment required for kernel 0230/4 2.05+ kernel_alignment Physical addr alignment required for kernel
0234/1 2.05+ relocatable_kernel Whether kernel is relocatable or not 0234/1 2.05+ relocatable_kernel Whether kernel is relocatable or not
0235/3 N/A pad2 Unused 0235/1 N/A pad2 Unused
0236/2 N/A pad3 Unused
0238/4 2.06+ cmdline_size Maximum size of the kernel command line 0238/4 2.06+ cmdline_size Maximum size of the kernel command line
023C/4 2.07+ hardware_subarch Hardware subarchitecture 023C/4 2.07+ hardware_subarch Hardware subarchitecture
0240/8 2.07+ hardware_subarch_data Subarchitecture-specific data 0240/8 2.07+ hardware_subarch_data Subarchitecture-specific data
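For reference, the renamed fields map onto the setup header roughly as follows; this fragment only mirrors the offsets and names of the table above, it is not copied from the actual bootparam.h layout:

	/* fragment mirroring offsets 0x22C-0x240 of the table above */
	__u32	ramdisk_max;		/* 0x22C: highest legal initrd address */
	__u32	kernel_alignment;	/* 0x230 */
	__u8	relocatable_kernel;	/* 0x234 */
	__u8	pad2;			/* 0x235 */
	__u16	pad3;			/* 0x236 */
	__u32	cmdline_size;		/* 0x238 */
	__u32	hardware_subarch;	/* 0x23C */
	__u64	hardware_subarch_data;	/* 0x240 */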
...@@ -299,14 +300,14 @@ Protocol: 2.00+ ...@@ -299,14 +300,14 @@ Protocol: 2.00+
e.g. 0x0204 for version 2.04, and 0x0a11 for a hypothetical version e.g. 0x0204 for version 2.04, and 0x0a11 for a hypothetical version
10.17. 10.17.
Field name: readmode_swtch Field name: realmode_swtch
Type: modify (optional) Type: modify (optional)
Offset/size: 0x208/4 Offset/size: 0x208/4
Protocol: 2.00+ Protocol: 2.00+
Boot loader hook (see ADVANCED BOOT LOADER HOOKS below.) Boot loader hook (see ADVANCED BOOT LOADER HOOKS below.)
Field name: start_sys Field name: start_sys_seg
Type: read Type: read
Offset/size: 0x20c/2 Offset/size: 0x20c/2
Protocol: 2.00+ Protocol: 2.00+
...@@ -468,7 +469,7 @@ Protocol: 2.02+ ...@@ -468,7 +469,7 @@ Protocol: 2.02+
zero, the kernel will assume that your boot loader does not support zero, the kernel will assume that your boot loader does not support
the 2.02+ protocol. the 2.02+ protocol.
Field name: initrd_addr_max Field name: ramdisk_max
Type: read Type: read
Offset/size: 0x22c/4 Offset/size: 0x22c/4
Protocol: 2.03+ Protocol: 2.03+
...@@ -542,7 +543,10 @@ Protocol: 2.08+ ...@@ -542,7 +543,10 @@ Protocol: 2.08+
The payload may be compressed. The format of both the compressed and The payload may be compressed. The format of both the compressed and
uncompressed data should be determined using the standard magic uncompressed data should be determined using the standard magic
numbers. Currently only gzip compressed ELF is used. numbers. The currently supported compression formats are gzip
(magic numbers 1F 8B or 1F 9E), bzip2 (magic number 42 5A) and LZMA
(magic number 5D 00). The uncompressed payload is currently always ELF
(magic number 7F 45 4C 46).
Field name: payload_length Field name: payload_length
Type: read Type: read
......
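A boot loader could classify the payload from the magic numbers listed in the text above along these lines (an illustrative sketch, not code from this patch):

	enum payload_fmt { FMT_UNKNOWN, FMT_GZIP, FMT_BZIP2, FMT_LZMA, FMT_ELF };

	static enum payload_fmt classify_payload(const unsigned char *p,
						 unsigned long len)
	{
		if (len < 4)
			return FMT_UNKNOWN;
		if (p[0] == 0x1f && (p[1] == 0x8b || p[1] == 0x9e))
			return FMT_GZIP;			/* gzip */
		if (p[0] == 0x42 && p[1] == 0x5a)
			return FMT_BZIP2;			/* bzip2 */
		if (p[0] == 0x5d && p[1] == 0x00)
			return FMT_LZMA;			/* LZMA */
		if (p[0] == 0x7f && p[1] == 'E' && p[2] == 'L' && p[3] == 'F')
			return FMT_ELF;				/* uncompressed ELF */
		return FMT_UNKNOWN;
	}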
Mini-HOWTO for using the earlyprintk=dbgp boot option with a
USB2 Debug port key and a debug cable, on x86 systems.
You need two computers, the 'USB debug key' special gadget and
two USB cables, connected like this:
[host/target] <-------> [USB debug key] <-------> [client/console]
1. There are three specific hardware requirements:
a.) Host/target system needs to have USB debug port capability.
You can check this capability by looking at a 'Debug port' bit in
the lspci -vvv output:
# lspci -vvv
...
00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03) (prog-if 20 [EHCI])
Subsystem: Lenovo ThinkPad T61
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin D routed to IRQ 19
Region 0: Memory at fe227000 (32-bit, non-prefetchable) [size=1K]
Capabilities: [50] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 PME-Enable- DSel=0 DScale=0 PME+
Capabilities: [58] Debug port: BAR=1 offset=00a0
^^^^^^^^^^^ <==================== [ HERE ]
Kernel driver in use: ehci_hcd
Kernel modules: ehci-hcd
...
( If your system does not list a debug port capability then you probably
won't be able to use the USB debug key. )
b.) You also need a Netchip USB debug cable/key:
http://www.plxtech.com/products/NET2000/NET20DC/default.asp
This is a small blue plastic connector with two USB connections;
it draws power from its USB connections.
c.) Thirdly, you need a second client/console system with a regular USB port.
2. Software requirements:
a.) On the host/target system:
You need to enable the following kernel config option:
CONFIG_EARLY_PRINTK_DBGP=y
And you need to add the boot command line: "earlyprintk=dbgp".
(If you are using Grub, append it to the 'kernel' line in
/etc/grub.conf)
NOTE: normally the earlyprintk console gets turned off once the
regular console is alive - use "earlyprintk=dbgp,keep" to keep
this channel open beyond early bootup. This can be useful for
debugging crashes under Xorg, etc.
b.) On the client/console system:
You should enable the following kernel config option:
CONFIG_USB_SERIAL_DEBUG=y
On the next bootup with the modified kernel you should
get one or more /dev/ttyUSBx devices.
Now this channel of kernel messages is ready to be used: start
your favorite terminal emulator (minicom, etc.) and set
it up to use /dev/ttyUSB0 - or use a raw 'cat /dev/ttyUSBx' to
see the raw output.
c.) On Nvidia Southbridge based systems: the kernel will try to probe
and find out which port has the debug device connected.
3. Testing that it works fine:
You can test the output by using earlyprintk=dbgp,keep and provoking
kernel messages on the host/target system. You can provoke a harmless
kernel message, for example, by doing:
echo h > /proc/sysrq-trigger
On the host/target system you should see this help line in "dmesg" output:
SysRq : HELP : loglevel(0-9) reBoot Crashdump terminate-all-tasks(E) memory-full-oom-kill(F) kill-all-tasks(I) saK show-backtrace-all-active-cpus(L) show-memory-usage(M) nice-all-RT-tasks(N) powerOff show-registers(P) show-all-timers(Q) unRaw Sync show-task-states(T) Unmount show-blocked-tasks(W) dump-ftrace-buffer(Z)
On the client/console system do:
cat /dev/ttyUSB0
And you should see the help line above displayed shortly after you've
provoked it on the host system.
If it does not work then please ask about it on the linux-kernel@vger.kernel.org
mailing list or contact the x86 maintainers.
...@@ -533,8 +533,9 @@ KBUILD_CFLAGS += $(call cc-option,-Wframe-larger-than=${CONFIG_FRAME_WARN}) ...@@ -533,8 +533,9 @@ KBUILD_CFLAGS += $(call cc-option,-Wframe-larger-than=${CONFIG_FRAME_WARN})
endif endif
# Force gcc to behave correct even for buggy distributions # Force gcc to behave correct even for buggy distributions
# Arch Makefiles may override this setting ifndef CONFIG_CC_STACKPROTECTOR
KBUILD_CFLAGS += $(call cc-option, -fno-stack-protector) KBUILD_CFLAGS += $(call cc-option, -fno-stack-protector)
endif
ifdef CONFIG_FRAME_POINTER ifdef CONFIG_FRAME_POINTER
KBUILD_CFLAGS += -fno-omit-frame-pointer -fno-optimize-sibling-calls KBUILD_CFLAGS += -fno-omit-frame-pointer -fno-optimize-sibling-calls
......
#ifndef _ALPHA_STATFS_H #ifndef _ALPHA_STATFS_H
#define _ALPHA_STATFS_H #define _ALPHA_STATFS_H
#include <linux/types.h>
/* Alpha is the only 64-bit platform with 32-bit statfs. And doesn't /* Alpha is the only 64-bit platform with 32-bit statfs. And doesn't
even seem to implement statfs64 */ even seem to implement statfs64 */
#define __statfs_word __u32 #define __statfs_word __u32
......
#ifndef _ALPHA_SWAB_H #ifndef _ALPHA_SWAB_H
#define _ALPHA_SWAB_H #define _ALPHA_SWAB_H
#include <asm/types.h> #include <linux/types.h>
#include <linux/compiler.h> #include <linux/compiler.h>
#include <asm/compiler.h> #include <asm/compiler.h>
......
...@@ -55,7 +55,7 @@ int irq_select_affinity(unsigned int irq) ...@@ -55,7 +55,7 @@ int irq_select_affinity(unsigned int irq)
cpu = (cpu < (NR_CPUS-1) ? cpu + 1 : 0); cpu = (cpu < (NR_CPUS-1) ? cpu + 1 : 0);
last_cpu = cpu; last_cpu = cpu;
irq_desc[irq].affinity = cpumask_of_cpu(cpu); cpumask_copy(irq_desc[irq].affinity, cpumask_of(cpu));
irq_desc[irq].chip->set_affinity(irq, cpumask_of(cpu)); irq_desc[irq].chip->set_affinity(irq, cpumask_of(cpu));
return 0; return 0;
} }
......
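The same conversion repeats throughout this merge: irq_desc[].affinity is now a struct cpumask pointer (possibly allocated off-stack), so direct cpumask_t assignments become cpumask_copy()/cpumask_setall() calls. In before/after form (illustrative only, condensed from the hunks in this commit):

	/* old: affinity was a cpumask_t embedded in irq_desc */
	irq_desc[irq].affinity = cpumask_of_cpu(cpu);
	cpus_setall(irq_desc[irq].affinity);

	/* new: affinity is a struct cpumask pointer */
	cpumask_copy(irq_desc[irq].affinity, cpumask_of(cpu));
	cpumask_setall(irq_desc[irq].affinity);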
...@@ -189,9 +189,21 @@ callback_init(void * kernel_end) ...@@ -189,9 +189,21 @@ callback_init(void * kernel_end)
if (alpha_using_srm) { if (alpha_using_srm) {
static struct vm_struct console_remap_vm; static struct vm_struct console_remap_vm;
unsigned long vaddr = VMALLOC_START; unsigned long nr_pages = 0;
unsigned long vaddr;
unsigned long i, j; unsigned long i, j;
/* calculate needed size */
for (i = 0; i < crb->map_entries; ++i)
nr_pages += crb->map[i].count;
/* register the vm area */
console_remap_vm.flags = VM_ALLOC;
console_remap_vm.size = nr_pages << PAGE_SHIFT;
vm_area_register_early(&console_remap_vm, PAGE_SIZE);
vaddr = (unsigned long)console_remap_vm.addr;
/* Set up the third level PTEs and update the virtual /* Set up the third level PTEs and update the virtual
addresses of the CRB entries. */ addresses of the CRB entries. */
for (i = 0; i < crb->map_entries; ++i) { for (i = 0; i < crb->map_entries; ++i) {
...@@ -213,12 +225,6 @@ callback_init(void * kernel_end) ...@@ -213,12 +225,6 @@ callback_init(void * kernel_end)
vaddr += PAGE_SIZE; vaddr += PAGE_SIZE;
} }
} }
/* Let vmalloc know that we've allocated some space. */
console_remap_vm.flags = VM_ALLOC;
console_remap_vm.addr = (void *) VMALLOC_START;
console_remap_vm.size = vaddr - VMALLOC_START;
vmlist = &console_remap_vm;
} }
callback_init_done = 1; callback_init_done = 1;
......
...@@ -2,7 +2,7 @@ ...@@ -2,7 +2,7 @@
#define __ARM_A_OUT_H__ #define __ARM_A_OUT_H__
#include <linux/personality.h> #include <linux/personality.h>
#include <asm/types.h> #include <linux/types.h>
struct exec struct exec
{ {
......
...@@ -14,7 +14,7 @@ ...@@ -14,7 +14,7 @@
#ifndef __ASMARM_SETUP_H #ifndef __ASMARM_SETUP_H
#define __ASMARM_SETUP_H #define __ASMARM_SETUP_H
#include <asm/types.h> #include <linux/types.h>
#define COMMAND_LINE_SIZE 1024 #define COMMAND_LINE_SIZE 1024
......
...@@ -16,7 +16,7 @@ ...@@ -16,7 +16,7 @@
#define __ASM_ARM_SWAB_H #define __ASM_ARM_SWAB_H
#include <linux/compiler.h> #include <linux/compiler.h>
#include <asm/types.h> #include <linux/types.h>
#if !defined(__STRICT_ANSI__) || defined(__KERNEL__) #if !defined(__STRICT_ANSI__) || defined(__KERNEL__)
# define __SWAB_64_THRU_32__ # define __SWAB_64_THRU_32__
......
...@@ -104,6 +104,11 @@ static struct irq_desc bad_irq_desc = { ...@@ -104,6 +104,11 @@ static struct irq_desc bad_irq_desc = {
.lock = __SPIN_LOCK_UNLOCKED(bad_irq_desc.lock), .lock = __SPIN_LOCK_UNLOCKED(bad_irq_desc.lock),
}; };
#ifdef CONFIG_CPUMASK_OFFSTACK
/* We are not allocating bad_irq_desc.affinity or .pending_mask */
#error "ARM architecture does not support CONFIG_CPUMASK_OFFSTACK."
#endif
/* /*
* do_IRQ handles all hardware IRQ's. Decoded IRQs should not * do_IRQ handles all hardware IRQ's. Decoded IRQs should not
* come via this function. Instead, they should provide their * come via this function. Instead, they should provide their
...@@ -161,7 +166,7 @@ void __init init_IRQ(void) ...@@ -161,7 +166,7 @@ void __init init_IRQ(void)
irq_desc[irq].status |= IRQ_NOREQUEST | IRQ_NOPROBE; irq_desc[irq].status |= IRQ_NOREQUEST | IRQ_NOPROBE;
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
bad_irq_desc.affinity = CPU_MASK_ALL; cpumask_setall(bad_irq_desc.affinity);
bad_irq_desc.cpu = smp_processor_id(); bad_irq_desc.cpu = smp_processor_id();
#endif #endif
init_arch_irq(); init_arch_irq();
...@@ -191,15 +196,16 @@ void migrate_irqs(void) ...@@ -191,15 +196,16 @@ void migrate_irqs(void)
struct irq_desc *desc = irq_desc + i; struct irq_desc *desc = irq_desc + i;
if (desc->cpu == cpu) { if (desc->cpu == cpu) {
unsigned int newcpu = any_online_cpu(desc->affinity); unsigned int newcpu = cpumask_any_and(desc->affinity,
cpu_online_mask);
if (newcpu == NR_CPUS) { if (newcpu >= nr_cpu_ids) {
if (printk_ratelimit()) if (printk_ratelimit())
printk(KERN_INFO "IRQ%u no longer affine to CPU%u\n", printk(KERN_INFO "IRQ%u no longer affine to CPU%u\n",
i, cpu); i, cpu);
cpus_setall(desc->affinity); cpumask_setall(desc->affinity);
newcpu = any_online_cpu(desc->affinity); newcpu = cpumask_any_and(desc->affinity,
cpu_online_mask);
} }
route_irq(desc, i, newcpu); route_irq(desc, i, newcpu);
......
...@@ -65,6 +65,7 @@ SECTIONS ...@@ -65,6 +65,7 @@ SECTIONS
#endif #endif
. = ALIGN(4096); . = ALIGN(4096);
__per_cpu_start = .; __per_cpu_start = .;
*(.data.percpu.page_aligned)
*(.data.percpu) *(.data.percpu)
*(.data.percpu.shared_aligned) *(.data.percpu.shared_aligned)
__per_cpu_end = .; __per_cpu_end = .;
......
...@@ -263,7 +263,7 @@ static void em_route_irq(int irq, unsigned int cpu) ...@@ -263,7 +263,7 @@ static void em_route_irq(int irq, unsigned int cpu)
const struct cpumask *mask = cpumask_of(cpu); const struct cpumask *mask = cpumask_of(cpu);
spin_lock_irq(&desc->lock); spin_lock_irq(&desc->lock);
desc->affinity = *mask; cpumask_copy(desc->affinity, mask);
desc->chip->set_affinity(irq, mask); desc->chip->set_affinity(irq, mask);
spin_unlock_irq(&desc->lock); spin_unlock_irq(&desc->lock);
} }
......
...@@ -181,7 +181,7 @@ source "kernel/Kconfig.preempt" ...@@ -181,7 +181,7 @@ source "kernel/Kconfig.preempt"
config QUICKLIST config QUICKLIST
def_bool y def_bool y
config HAVE_ARCH_BOOTMEM_NODE config HAVE_ARCH_BOOTMEM
def_bool n def_bool n
config ARCH_HAVE_MEMORY_PRESENT config ARCH_HAVE_MEMORY_PRESENT
......
...@@ -4,7 +4,7 @@ ...@@ -4,7 +4,7 @@
#ifndef __ASM_AVR32_SWAB_H #ifndef __ASM_AVR32_SWAB_H
#define __ASM_AVR32_SWAB_H #define __ASM_AVR32_SWAB_H
#include <asm/types.h> #include <linux/types.h>
#include <linux/compiler.h> #include <linux/compiler.h>
#define __SWAB_64_THRU_32__ #define __SWAB_64_THRU_32__
......
...@@ -3,14 +3,4 @@ ...@@ -3,14 +3,4 @@
#include <asm-generic/percpu.h> #include <asm-generic/percpu.h>
#ifdef CONFIG_MODULES
#define PERCPU_MODULE_RESERVE 8192
#else
#define PERCPU_MODULE_RESERVE 0
#endif
#define PERCPU_ENOUGH_ROOM \
(ALIGN(__per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES) + \
PERCPU_MODULE_RESERVE)
#endif /* __ARCH_BLACKFIN_PERCPU__ */ #endif /* __ARCH_BLACKFIN_PERCPU__ */
#ifndef _BLACKFIN_SWAB_H #ifndef _BLACKFIN_SWAB_H
#define _BLACKFIN_SWAB_H #define _BLACKFIN_SWAB_H
#include <asm/types.h> #include <linux/types.h>
#include <linux/compiler.h> #include <linux/compiler.h>
#if defined(__GNUC__) && !defined(__STRICT_ANSI__) || defined(__KERNEL__) #if defined(__GNUC__) && !defined(__STRICT_ANSI__) || defined(__KERNEL__)
......
...@@ -70,6 +70,11 @@ static struct irq_desc bad_irq_desc = { ...@@ -70,6 +70,11 @@ static struct irq_desc bad_irq_desc = {
#endif #endif
}; };
#ifdef CONFIG_CPUMASK_OFFSTACK
/* We are not allocating a variable-sized bad_irq_desc.affinity */
#error "Blackfin architecture does not support CONFIG_CPUMASK_OFFSTACK."
#endif
int show_interrupts(struct seq_file *p, void *v) int show_interrupts(struct seq_file *p, void *v)
{ {
int i = *(loff_t *) v, j; int i = *(loff_t *) v, j;
......
#ifndef _H8300_SWAB_H #ifndef _H8300_SWAB_H
#define _H8300_SWAB_H #define _H8300_SWAB_H
#include <asm/types.h> #include <linux/types.h>
#if defined(__GNUC__) && !defined(__STRICT_ANSI__) || defined(__KERNEL__) #if defined(__GNUC__) && !defined(__STRICT_ANSI__) || defined(__KERNEL__)
# define __SWAB_64_THRU_32__ # define __SWAB_64_THRU_32__
......
...@@ -6,8 +6,6 @@ ...@@ -6,8 +6,6 @@
* David Mosberger-Tang <davidm@hpl.hp.com> * David Mosberger-Tang <davidm@hpl.hp.com>
*/ */
#include <asm/types.h>
/* floating point status register: */ /* floating point status register: */
#define FPSR_TRAP_VD (1 << 0) /* invalid op trap disabled */ #define FPSR_TRAP_VD (1 << 0) /* invalid op trap disabled */
#define FPSR_TRAP_DD (1 << 1) /* denormal trap disabled */ #define FPSR_TRAP_DD (1 << 1) /* denormal trap disabled */
......
...@@ -6,6 +6,7 @@ ...@@ -6,6 +6,7 @@
* Copyright (C) 2002,2003 Suresh Siddha <suresh.b.siddha@intel.com> * Copyright (C) 2002,2003 Suresh Siddha <suresh.b.siddha@intel.com>
*/ */
#include <linux/types.h>
#include <linux/compiler.h> #include <linux/compiler.h>
/* define this macro to get some asm stmts included in 'c' files */ /* define this macro to get some asm stmts included in 'c' files */
......
...@@ -10,6 +10,7 @@ ...@@ -10,6 +10,7 @@
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <linux/types.h>
/* include compiler specific intrinsics */ /* include compiler specific intrinsics */
#include <asm/ia64regs.h> #include <asm/ia64regs.h>
#ifdef __INTEL_COMPILER #ifdef __INTEL_COMPILER
......
...@@ -21,8 +21,7 @@ ...@@ -21,8 +21,7 @@
* *
*/ */
#include <asm/types.h> #include <linux/types.h>
#include <linux/ioctl.h> #include <linux/ioctl.h>
/* Select x86 specific features in <linux/kvm.h> */ /* Select x86 specific features in <linux/kvm.h> */
......
...@@ -27,12 +27,12 @@ extern void *per_cpu_init(void); ...@@ -27,12 +27,12 @@ extern void *per_cpu_init(void);
#else /* ! SMP */ #else /* ! SMP */
#define PER_CPU_ATTRIBUTES __attribute__((__section__(".data.percpu")))
#define per_cpu_init() (__phys_per_cpu_start) #define per_cpu_init() (__phys_per_cpu_start)
#endif /* SMP */ #endif /* SMP */
#define PER_CPU_BASE_SECTION ".data.percpu"
/* /*
* Be extremely careful when taking the address of this variable! Due to virtual * Be extremely careful when taking the address of this variable! Due to virtual
* remapping, it is different from the canonical address returned by __get_cpu_var(var)! * remapping, it is different from the canonical address returned by __get_cpu_var(var)!
......
...@@ -6,7 +6,7 @@ ...@@ -6,7 +6,7 @@
* David Mosberger-Tang <davidm@hpl.hp.com>, Hewlett-Packard Co. * David Mosberger-Tang <davidm@hpl.hp.com>, Hewlett-Packard Co.
*/ */
#include <asm/types.h> #include <linux/types.h>
#include <asm/intrinsics.h> #include <asm/intrinsics.h>
#include <linux/compiler.h> #include <linux/compiler.h>
......
...@@ -84,7 +84,7 @@ void build_cpu_to_node_map(void); ...@@ -84,7 +84,7 @@ void build_cpu_to_node_map(void);
.child = NULL, \ .child = NULL, \
.groups = NULL, \ .groups = NULL, \
.min_interval = 8, \ .min_interval = 8, \
.max_interval = 8*(min(num_online_cpus(), 32)), \ .max_interval = 8*(min(num_online_cpus(), 32U)), \
.busy_factor = 64, \ .busy_factor = 64, \
.imbalance_pct = 125, \ .imbalance_pct = 125, \
.cache_nice_tries = 2, \ .cache_nice_tries = 2, \
......
#ifndef _ASM_IA64_UV_UV_H
#define _ASM_IA64_UV_UV_H
#include <asm/system.h>
#include <asm/sn/simulator.h>
static inline int is_uv_system(void)
{
/* temporary support for running on hardware simulator */
return IS_MEDUSA() || ia64_platform_is("uv");
}
#endif /* _ASM_IA64_UV_UV_H */
...@@ -199,6 +199,10 @@ char *__init __acpi_map_table(unsigned long phys_addr, unsigned long size) ...@@ -199,6 +199,10 @@ char *__init __acpi_map_table(unsigned long phys_addr, unsigned long size)
return __va(phys_addr); return __va(phys_addr);
} }
void __init __acpi_unmap_table(char *map, unsigned long size)
{
}
/* -------------------------------------------------------------------------- /* --------------------------------------------------------------------------
Boot-time Table Parsing Boot-time Table Parsing
-------------------------------------------------------------------------- */ -------------------------------------------------------------------------- */
......
...@@ -880,7 +880,7 @@ iosapic_unregister_intr (unsigned int gsi) ...@@ -880,7 +880,7 @@ iosapic_unregister_intr (unsigned int gsi)
if (iosapic_intr_info[irq].count == 0) { if (iosapic_intr_info[irq].count == 0) {
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
/* Clear affinity */ /* Clear affinity */
cpus_setall(idesc->affinity); cpumask_setall(idesc->affinity);
#endif #endif
/* Clear the interrupt information */ /* Clear the interrupt information */
iosapic_intr_info[irq].dest = 0; iosapic_intr_info[irq].dest = 0;
......
...@@ -103,7 +103,7 @@ static char irq_redir [NR_IRQS]; // = { [0 ... NR_IRQS-1] = 1 }; ...@@ -103,7 +103,7 @@ static char irq_redir [NR_IRQS]; // = { [0 ... NR_IRQS-1] = 1 };
void set_irq_affinity_info (unsigned int irq, int hwid, int redir) void set_irq_affinity_info (unsigned int irq, int hwid, int redir)
{ {
if (irq < NR_IRQS) { if (irq < NR_IRQS) {
cpumask_copy(&irq_desc[irq].affinity, cpumask_copy(irq_desc[irq].affinity,
cpumask_of(cpu_logical_id(hwid))); cpumask_of(cpu_logical_id(hwid)));
irq_redir[irq] = (char) (redir & 0xff); irq_redir[irq] = (char) (redir & 0xff);
} }
...@@ -148,7 +148,7 @@ static void migrate_irqs(void) ...@@ -148,7 +148,7 @@ static void migrate_irqs(void)
if (desc->status == IRQ_PER_CPU) if (desc->status == IRQ_PER_CPU)
continue; continue;
if (cpumask_any_and(&irq_desc[irq].affinity, cpu_online_mask) if (cpumask_any_and(irq_desc[irq].affinity, cpu_online_mask)
>= nr_cpu_ids) { >= nr_cpu_ids) {
/* /*
* Save it for phase 2 processing * Save it for phase 2 processing
......
...@@ -493,11 +493,13 @@ ia64_handle_irq (ia64_vector vector, struct pt_regs *regs) ...@@ -493,11 +493,13 @@ ia64_handle_irq (ia64_vector vector, struct pt_regs *regs)
saved_tpr = ia64_getreg(_IA64_REG_CR_TPR); saved_tpr = ia64_getreg(_IA64_REG_CR_TPR);
ia64_srlz_d(); ia64_srlz_d();
while (vector != IA64_SPURIOUS_INT_VECTOR) { while (vector != IA64_SPURIOUS_INT_VECTOR) {
struct irq_desc *desc = irq_to_desc(vector);
if (unlikely(IS_LOCAL_TLB_FLUSH(vector))) { if (unlikely(IS_LOCAL_TLB_FLUSH(vector))) {
smp_local_flush_tlb(); smp_local_flush_tlb();
kstat_this_cpu.irqs[vector]++; kstat_incr_irqs_this_cpu(vector, desc);
} else if (unlikely(IS_RESCHEDULE(vector))) } else if (unlikely(IS_RESCHEDULE(vector)))
kstat_this_cpu.irqs[vector]++; kstat_incr_irqs_this_cpu(vector, desc);
else { else {
int irq = local_vector_to_irq(vector); int irq = local_vector_to_irq(vector);
...@@ -551,11 +553,13 @@ void ia64_process_pending_intr(void) ...@@ -551,11 +553,13 @@ void ia64_process_pending_intr(void)
* Perform normal interrupt style processing * Perform normal interrupt style processing
*/ */
while (vector != IA64_SPURIOUS_INT_VECTOR) { while (vector != IA64_SPURIOUS_INT_VECTOR) {
struct irq_desc *desc = irq_to_desc(vector);
if (unlikely(IS_LOCAL_TLB_FLUSH(vector))) { if (unlikely(IS_LOCAL_TLB_FLUSH(vector))) {
smp_local_flush_tlb(); smp_local_flush_tlb();
kstat_this_cpu.irqs[vector]++; kstat_incr_irqs_this_cpu(vector, desc);
} else if (unlikely(IS_RESCHEDULE(vector))) } else if (unlikely(IS_RESCHEDULE(vector)))
kstat_this_cpu.irqs[vector]++; kstat_incr_irqs_this_cpu(vector, desc);
else { else {
struct pt_regs *old_regs = set_irq_regs(NULL); struct pt_regs *old_regs = set_irq_regs(NULL);
int irq = local_vector_to_irq(vector); int irq = local_vector_to_irq(vector);
......
...@@ -75,7 +75,7 @@ static void ia64_set_msi_irq_affinity(unsigned int irq, ...@@ -75,7 +75,7 @@ static void ia64_set_msi_irq_affinity(unsigned int irq,
msg.data = data; msg.data = data;
write_msi_msg(irq, &msg); write_msi_msg(irq, &msg);
irq_desc[irq].affinity = cpumask_of_cpu(cpu); cpumask_copy(irq_desc[irq].affinity, cpumask_of(cpu));
} }
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
...@@ -187,7 +187,7 @@ static void dmar_msi_set_affinity(unsigned int irq, const struct cpumask *mask) ...@@ -187,7 +187,7 @@ static void dmar_msi_set_affinity(unsigned int irq, const struct cpumask *mask)
msg.address_lo |= MSI_ADDR_DESTID_CPU(cpu_physical_id(cpu)); msg.address_lo |= MSI_ADDR_DESTID_CPU(cpu_physical_id(cpu));
dmar_msi_write(irq, &msg); dmar_msi_write(irq, &msg);
irq_desc[irq].affinity = *mask; cpumask_copy(irq_desc[irq].affinity, mask);
} }
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
......
...@@ -219,6 +219,7 @@ SECTIONS ...@@ -219,6 +219,7 @@ SECTIONS
.data.percpu PERCPU_ADDR : AT(__phys_per_cpu_start - LOAD_OFFSET) .data.percpu PERCPU_ADDR : AT(__phys_per_cpu_start - LOAD_OFFSET)
{ {
__per_cpu_start = .; __per_cpu_start = .;
*(.data.percpu.page_aligned)
*(.data.percpu) *(.data.percpu)
*(.data.percpu.shared_aligned) *(.data.percpu.shared_aligned)
__per_cpu_end = .; __per_cpu_end = .;
......
...@@ -205,7 +205,7 @@ static void sn_set_msi_irq_affinity(unsigned int irq, ...@@ -205,7 +205,7 @@ static void sn_set_msi_irq_affinity(unsigned int irq,
msg.address_lo = (u32)(bus_addr & 0x00000000ffffffff); msg.address_lo = (u32)(bus_addr & 0x00000000ffffffff);
write_msi_msg(irq, &msg); write_msi_msg(irq, &msg);
irq_desc[irq].affinity = *cpu_mask; cpumask_copy(irq_desc[irq].affinity, cpu_mask);
} }
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
......
...@@ -66,7 +66,7 @@ extern void smtc_forward_irq(unsigned int irq); ...@@ -66,7 +66,7 @@ extern void smtc_forward_irq(unsigned int irq);
*/ */
#define IRQ_AFFINITY_HOOK(irq) \ #define IRQ_AFFINITY_HOOK(irq) \
do { \ do { \
if (!cpu_isset(smp_processor_id(), irq_desc[irq].affinity)) { \ if (!cpumask_test_cpu(smp_processor_id(), irq_desc[irq].affinity)) {\
smtc_forward_irq(irq); \ smtc_forward_irq(irq); \
irq_exit(); \ irq_exit(); \
return; \ return; \
......
...@@ -9,6 +9,7 @@ ...@@ -9,6 +9,7 @@
#ifndef _ASM_SIGCONTEXT_H #ifndef _ASM_SIGCONTEXT_H
#define _ASM_SIGCONTEXT_H #define _ASM_SIGCONTEXT_H
#include <linux/types.h>
#include <asm/sgidefs.h> #include <asm/sgidefs.h>
#if _MIPS_SIM == _MIPS_SIM_ABI32 #if _MIPS_SIM == _MIPS_SIM_ABI32
......
...@@ -9,7 +9,7 @@ ...@@ -9,7 +9,7 @@
#define _ASM_SWAB_H #define _ASM_SWAB_H
#include <linux/compiler.h> #include <linux/compiler.h>
#include <asm/types.h> #include <linux/types.h>
#define __SWAB_64_THRU_32__ #define __SWAB_64_THRU_32__
......
...@@ -187,7 +187,7 @@ static void gic_set_affinity(unsigned int irq, const struct cpumask *cpumask) ...@@ -187,7 +187,7 @@ static void gic_set_affinity(unsigned int irq, const struct cpumask *cpumask)
set_bit(irq, pcpu_masks[first_cpu(tmp)].pcpu_mask); set_bit(irq, pcpu_masks[first_cpu(tmp)].pcpu_mask);
} }
irq_desc[irq].affinity = *cpumask; cpumask_copy(irq_desc[irq].affinity, cpumask);
spin_unlock_irqrestore(&gic_lock, flags); spin_unlock_irqrestore(&gic_lock, flags);
} }
......
...@@ -686,7 +686,7 @@ void smtc_forward_irq(unsigned int irq) ...@@ -686,7 +686,7 @@ void smtc_forward_irq(unsigned int irq)
* and efficiency, we just pick the easiest one to find. * and efficiency, we just pick the easiest one to find.
*/ */
target = first_cpu(irq_desc[irq].affinity); target = cpumask_first(irq_desc[irq].affinity);
/* /*
* We depend on the platform code to have correctly processed * We depend on the platform code to have correctly processed
...@@ -921,11 +921,13 @@ void ipi_decode(struct smtc_ipi *pipi) ...@@ -921,11 +921,13 @@ void ipi_decode(struct smtc_ipi *pipi)
struct clock_event_device *cd; struct clock_event_device *cd;
void *arg_copy = pipi->arg; void *arg_copy = pipi->arg;
int type_copy = pipi->type; int type_copy = pipi->type;
int irq = MIPS_CPU_IRQ_BASE + 1;
smtc_ipi_nq(&freeIPIq, pipi); smtc_ipi_nq(&freeIPIq, pipi);
switch (type_copy) { switch (type_copy) {
case SMTC_CLOCK_TICK: case SMTC_CLOCK_TICK:
irq_enter(); irq_enter();
kstat_this_cpu.irqs[MIPS_CPU_IRQ_BASE + 1]++; kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
cd = &per_cpu(mips_clockevent_device, cpu); cd = &per_cpu(mips_clockevent_device, cpu);
cd->event_handler(cd); cd->event_handler(cd);
irq_exit(); irq_exit();
......
...@@ -116,7 +116,7 @@ struct plat_smp_ops msmtc_smp_ops = { ...@@ -116,7 +116,7 @@ struct plat_smp_ops msmtc_smp_ops = {
void plat_set_irq_affinity(unsigned int irq, const struct cpumask *affinity) void plat_set_irq_affinity(unsigned int irq, const struct cpumask *affinity)
{ {
cpumask_t tmask = *affinity; cpumask_t tmask;
int cpu = 0; int cpu = 0;
void smtc_set_irq_affinity(unsigned int irq, cpumask_t aff); void smtc_set_irq_affinity(unsigned int irq, cpumask_t aff);
...@@ -139,11 +139,12 @@ void plat_set_irq_affinity(unsigned int irq, const struct cpumask *affinity) ...@@ -139,11 +139,12 @@ void plat_set_irq_affinity(unsigned int irq, const struct cpumask *affinity)
* be made to forward to an offline "CPU". * be made to forward to an offline "CPU".
*/ */
cpumask_copy(&tmask, affinity);
for_each_cpu(cpu, affinity) { for_each_cpu(cpu, affinity) {
if ((cpu_data[cpu].vpe_id != 0) || !cpu_online(cpu)) if ((cpu_data[cpu].vpe_id != 0) || !cpu_online(cpu))
cpu_clear(cpu, tmask); cpu_clear(cpu, tmask);
} }
irq_desc[irq].affinity = tmask; cpumask_copy(irq_desc[irq].affinity, &tmask);
if (cpus_empty(tmask)) if (cpus_empty(tmask))
/* /*
......
...@@ -155,7 +155,7 @@ static void indy_buserror_irq(void) ...@@ -155,7 +155,7 @@ static void indy_buserror_irq(void)
int irq = SGI_BUSERR_IRQ; int irq = SGI_BUSERR_IRQ;
irq_enter(); irq_enter();
kstat_this_cpu.irqs[irq]++; kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
ip22_be_interrupt(irq); ip22_be_interrupt(irq);
irq_exit(); irq_exit();
} }
......
...@@ -122,7 +122,7 @@ void indy_8254timer_irq(void) ...@@ -122,7 +122,7 @@ void indy_8254timer_irq(void)
char c; char c;
irq_enter(); irq_enter();
kstat_this_cpu.irqs[irq]++; kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
printk(KERN_ALERT "Oops, got 8254 interrupt.\n"); printk(KERN_ALERT "Oops, got 8254 interrupt.\n");
ArcRead(0, &c, 1, &cnt); ArcRead(0, &c, 1, &cnt);
ArcEnterInteractiveMode(); ArcEnterInteractiveMode();
......
...@@ -178,9 +178,10 @@ struct plat_smp_ops bcm1480_smp_ops = { ...@@ -178,9 +178,10 @@ struct plat_smp_ops bcm1480_smp_ops = {
void bcm1480_mailbox_interrupt(void) void bcm1480_mailbox_interrupt(void)
{ {
int cpu = smp_processor_id(); int cpu = smp_processor_id();
int irq = K_BCM1480_INT_MBOX_0_0;
unsigned int action; unsigned int action;
kstat_this_cpu.irqs[K_BCM1480_INT_MBOX_0_0]++; kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
/* Load the mailbox register to figure out what we're supposed to do */ /* Load the mailbox register to figure out what we're supposed to do */
action = (__raw_readq(mailbox_0_regs[cpu]) >> 48) & 0xffff; action = (__raw_readq(mailbox_0_regs[cpu]) >> 48) & 0xffff;
......
...@@ -166,9 +166,10 @@ struct plat_smp_ops sb_smp_ops = { ...@@ -166,9 +166,10 @@ struct plat_smp_ops sb_smp_ops = {
void sb1250_mailbox_interrupt(void) void sb1250_mailbox_interrupt(void)
{ {
int cpu = smp_processor_id(); int cpu = smp_processor_id();
int irq = K_INT_MBOX_0;
unsigned int action; unsigned int action;
kstat_this_cpu.irqs[K_INT_MBOX_0]++; kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
/* Load the mailbox register to figure out what we're supposed to do */ /* Load the mailbox register to figure out what we're supposed to do */
action = (____raw_readq(mailbox_regs[cpu]) >> 48) & 0xffff; action = (____raw_readq(mailbox_regs[cpu]) >> 48) & 0xffff;
......
...@@ -130,6 +130,7 @@ void watchdog_interrupt(struct pt_regs *regs, enum exception_code excep) ...@@ -130,6 +130,7 @@ void watchdog_interrupt(struct pt_regs *regs, enum exception_code excep)
* the stack NMI-atomically, it's safe to use smp_processor_id(). * the stack NMI-atomically, it's safe to use smp_processor_id().
*/ */
int sum, cpu = smp_processor_id(); int sum, cpu = smp_processor_id();
int irq = NMIIRQ;
u8 wdt, tmp; u8 wdt, tmp;
wdt = WDCTR & ~WDCTR_WDCNE; wdt = WDCTR & ~WDCTR_WDCNE;
...@@ -138,7 +139,7 @@ void watchdog_interrupt(struct pt_regs *regs, enum exception_code excep) ...@@ -138,7 +139,7 @@ void watchdog_interrupt(struct pt_regs *regs, enum exception_code excep)
NMICR = NMICR_WDIF; NMICR = NMICR_WDIF;
nmi_count(cpu)++; nmi_count(cpu)++;
kstat_this_cpu.irqs[NMIIRQ]++; kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
sum = irq_stat[cpu].__irq_count; sum = irq_stat[cpu].__irq_count;
if (last_irq_sums[cpu] == sum) { if (last_irq_sums[cpu] == sum) {
......
...@@ -336,10 +336,11 @@ ...@@ -336,10 +336,11 @@
#define NUM_PDC_RESULT 32 #define NUM_PDC_RESULT 32
#if !defined(__ASSEMBLY__) #if !defined(__ASSEMBLY__)
#ifdef __KERNEL__
#include <linux/types.h> #include <linux/types.h>
#ifdef __KERNEL__
extern int pdc_type; extern int pdc_type;
/* Values for pdc_type */ /* Values for pdc_type */
......
#ifndef _PARISC_SWAB_H #ifndef _PARISC_SWAB_H
#define _PARISC_SWAB_H #define _PARISC_SWAB_H
#include <asm/types.h> #include <linux/types.h>
#include <linux/compiler.h> #include <linux/compiler.h>
#define __SWAB_64_THRU_32__ #define __SWAB_64_THRU_32__
......
...@@ -138,7 +138,7 @@ static void cpu_set_affinity_irq(unsigned int irq, const struct cpumask *dest) ...@@ -138,7 +138,7 @@ static void cpu_set_affinity_irq(unsigned int irq, const struct cpumask *dest)
if (cpu_dest < 0) if (cpu_dest < 0)
return; return;
cpumask_copy(&irq_desc[irq].affinity, &cpumask_of_cpu(cpu_dest)); cpumask_copy(&irq_desc[irq].affinity, dest);
} }
#endif #endif
......
...@@ -9,7 +9,7 @@ ...@@ -9,7 +9,7 @@
#ifndef __ASM_BOOTX_H__ #ifndef __ASM_BOOTX_H__
#define __ASM_BOOTX_H__ #define __ASM_BOOTX_H__
#include <asm/types.h> #include <linux/types.h>
#ifdef macintosh #ifdef macintosh
#include <Types.h> #include <Types.h>
......
...@@ -7,7 +7,7 @@ ...@@ -7,7 +7,7 @@
#include <asm/string.h> #include <asm/string.h>
#endif #endif
#include <asm/types.h> #include <linux/types.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/cputable.h> #include <asm/cputable.h>
#include <asm/auxvec.h> #include <asm/auxvec.h>
......
...@@ -20,7 +20,7 @@ ...@@ -20,7 +20,7 @@
#ifndef __LINUX_KVM_POWERPC_H #ifndef __LINUX_KVM_POWERPC_H
#define __LINUX_KVM_POWERPC_H #define __LINUX_KVM_POWERPC_H
#include <asm/types.h> #include <linux/types.h>
struct kvm_regs { struct kvm_regs {
__u64 pc; __u64 pc;
......
...@@ -8,6 +8,7 @@ ...@@ -8,6 +8,7 @@
#define _ASM_MMZONE_H_ #define _ASM_MMZONE_H_
#ifdef __KERNEL__ #ifdef __KERNEL__
#include <linux/cpumask.h>
/* /*
* generic non-linear memory support: * generic non-linear memory support:
......
...@@ -19,6 +19,7 @@ ...@@ -19,6 +19,7 @@
#ifndef _ASM_POWERPC_PS3FB_H_ #ifndef _ASM_POWERPC_PS3FB_H_
#define _ASM_POWERPC_PS3FB_H_ #define _ASM_POWERPC_PS3FB_H_
#include <linux/types.h>
#include <linux/ioctl.h> #include <linux/ioctl.h>
/* ioctl */ /* ioctl */
......
...@@ -23,9 +23,10 @@ ...@@ -23,9 +23,10 @@
#ifndef _SPU_INFO_H #ifndef _SPU_INFO_H
#define _SPU_INFO_H #define _SPU_INFO_H
#include <linux/types.h>
#ifdef __KERNEL__ #ifdef __KERNEL__
#include <asm/spu.h> #include <asm/spu.h>
#include <linux/types.h>
#else #else
struct mfc_cq_sr { struct mfc_cq_sr {
__u64 mfc_cq_data0_RW; __u64 mfc_cq_data0_RW;
......
...@@ -8,7 +8,7 @@ ...@@ -8,7 +8,7 @@
* 2 of the License, or (at your option) any later version. * 2 of the License, or (at your option) any later version.
*/ */
#include <asm/types.h> #include <linux/types.h>
#include <linux/compiler.h> #include <linux/compiler.h>
#ifdef __GNUC__ #ifdef __GNUC__
......
...@@ -231,7 +231,7 @@ void fixup_irqs(cpumask_t map) ...@@ -231,7 +231,7 @@ void fixup_irqs(cpumask_t map)
if (irq_desc[irq].status & IRQ_PER_CPU) if (irq_desc[irq].status & IRQ_PER_CPU)
continue; continue;
cpus_and(mask, irq_desc[irq].affinity, map); cpumask_and(&mask, irq_desc[irq].affinity, &map);
if (any_online_cpu(mask) == NR_CPUS) { if (any_online_cpu(mask) == NR_CPUS) {
printk("Breaking affinity for irq %i\n", irq); printk("Breaking affinity for irq %i\n", irq);
mask = map; mask = map;
......
...@@ -184,6 +184,7 @@ SECTIONS ...@@ -184,6 +184,7 @@ SECTIONS
. = ALIGN(PAGE_SIZE); . = ALIGN(PAGE_SIZE);
.data.percpu : AT(ADDR(.data.percpu) - LOAD_OFFSET) { .data.percpu : AT(ADDR(.data.percpu) - LOAD_OFFSET) {
__per_cpu_start = .; __per_cpu_start = .;
*(.data.percpu.page_aligned)
*(.data.percpu) *(.data.percpu)
*(.data.percpu.shared_aligned) *(.data.percpu.shared_aligned)
__per_cpu_end = .; __per_cpu_end = .;
......
...@@ -153,9 +153,10 @@ static int get_irq_server(unsigned int virq, unsigned int strict_check) ...@@ -153,9 +153,10 @@ static int get_irq_server(unsigned int virq, unsigned int strict_check)
{ {
int server; int server;
/* For the moment only implement delivery to all cpus or one cpu */ /* For the moment only implement delivery to all cpus or one cpu */
cpumask_t cpumask = irq_desc[virq].affinity; cpumask_t cpumask;
cpumask_t tmp = CPU_MASK_NONE; cpumask_t tmp = CPU_MASK_NONE;
cpumask_copy(&cpumask, irq_desc[virq].affinity);
if (!distribute_irqs) if (!distribute_irqs)
return default_server; return default_server;
...@@ -869,7 +870,7 @@ void xics_migrate_irqs_away(void) ...@@ -869,7 +870,7 @@ void xics_migrate_irqs_away(void)
virq, cpu); virq, cpu);
/* Reset affinity to all cpus */ /* Reset affinity to all cpus */
irq_desc[virq].affinity = CPU_MASK_ALL; cpumask_setall(irq_desc[virq].affinity);
desc->chip->set_affinity(virq, cpu_all_mask); desc->chip->set_affinity(virq, cpu_all_mask);
unlock: unlock:
spin_unlock_irqrestore(&desc->lock, flags); spin_unlock_irqrestore(&desc->lock, flags);
......
...@@ -566,9 +566,10 @@ static void __init mpic_scan_ht_pics(struct mpic *mpic) ...@@ -566,9 +566,10 @@ static void __init mpic_scan_ht_pics(struct mpic *mpic)
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
static int irq_choose_cpu(unsigned int virt_irq) static int irq_choose_cpu(unsigned int virt_irq)
{ {
cpumask_t mask = irq_desc[virt_irq].affinity; cpumask_t mask;
int cpuid; int cpuid;
cpumask_copy(&mask, irq_desc[virt_irq].affinity);
if (cpus_equal(mask, CPU_MASK_ALL)) { if (cpus_equal(mask, CPU_MASK_ALL)) {
static int irq_rover; static int irq_rover;
static DEFINE_SPINLOCK(irq_rover_lock); static DEFINE_SPINLOCK(irq_rover_lock);
......
...@@ -3,6 +3,8 @@ ...@@ -3,6 +3,8 @@
#ifdef CONFIG_NEED_MULTIPLE_NODES #ifdef CONFIG_NEED_MULTIPLE_NODES
#include <linux/cpumask.h>
extern struct pglist_data *node_data[]; extern struct pglist_data *node_data[];
#define NODE_DATA(nid) (node_data[nid]) #define NODE_DATA(nid) (node_data[nid])
......
...@@ -252,9 +252,10 @@ struct irq_handler_data { ...@@ -252,9 +252,10 @@ struct irq_handler_data {
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
static int irq_choose_cpu(unsigned int virt_irq) static int irq_choose_cpu(unsigned int virt_irq)
{ {
cpumask_t mask = irq_desc[virt_irq].affinity; cpumask_t mask;
int cpuid; int cpuid;
cpumask_copy(&mask, irq_desc[virt_irq].affinity);
if (cpus_equal(mask, CPU_MASK_ALL)) { if (cpus_equal(mask, CPU_MASK_ALL)) {
static int irq_rover; static int irq_rover;
static DEFINE_SPINLOCK(irq_rover_lock); static DEFINE_SPINLOCK(irq_rover_lock);
...@@ -805,7 +806,7 @@ void fixup_irqs(void) ...@@ -805,7 +806,7 @@ void fixup_irqs(void)
!(irq_desc[irq].status & IRQ_PER_CPU)) { !(irq_desc[irq].status & IRQ_PER_CPU)) {
if (irq_desc[irq].chip->set_affinity) if (irq_desc[irq].chip->set_affinity)
irq_desc[irq].chip->set_affinity(irq, irq_desc[irq].chip->set_affinity(irq,
&irq_desc[irq].affinity); irq_desc[irq].affinity);
} }
spin_unlock_irqrestore(&irq_desc[irq].lock, flags); spin_unlock_irqrestore(&irq_desc[irq].lock, flags);
} }
......
...@@ -729,7 +729,7 @@ void timer_interrupt(int irq, struct pt_regs *regs) ...@@ -729,7 +729,7 @@ void timer_interrupt(int irq, struct pt_regs *regs)
irq_enter(); irq_enter();
kstat_this_cpu.irqs[0]++; kstat_incr_irqs_this_cpu(0, irq_to_desc(0));
if (unlikely(!evt->event_handler)) { if (unlikely(!evt->event_handler)) {
printk(KERN_WARNING printk(KERN_WARNING
......
...@@ -7,7 +7,7 @@ source "lib/Kconfig.debug" ...@@ -7,7 +7,7 @@ source "lib/Kconfig.debug"
config STRICT_DEVMEM config STRICT_DEVMEM
bool "Filter access to /dev/mem" bool "Filter access to /dev/mem"
help ---help---
If this option is disabled, you allow userspace (root) access to all If this option is disabled, you allow userspace (root) access to all
of memory, including kernel and userspace memory. Accidental of memory, including kernel and userspace memory. Accidental
access to this is obviously disastrous, but specific access can access to this is obviously disastrous, but specific access can
...@@ -25,7 +25,7 @@ config STRICT_DEVMEM ...@@ -25,7 +25,7 @@ config STRICT_DEVMEM
config X86_VERBOSE_BOOTUP config X86_VERBOSE_BOOTUP
bool "Enable verbose x86 bootup info messages" bool "Enable verbose x86 bootup info messages"
default y default y
help ---help---
Enables the informational output from the decompression stage Enables the informational output from the decompression stage
(e.g. bzImage) of the boot. If you disable this you will still (e.g. bzImage) of the boot. If you disable this you will still
see errors. Disable this if you want silent bootup. see errors. Disable this if you want silent bootup.
...@@ -33,7 +33,7 @@ config X86_VERBOSE_BOOTUP ...@@ -33,7 +33,7 @@ config X86_VERBOSE_BOOTUP
config EARLY_PRINTK config EARLY_PRINTK
bool "Early printk" if EMBEDDED bool "Early printk" if EMBEDDED
default y default y
help ---help---
Write kernel log output directly into the VGA buffer or to a serial Write kernel log output directly into the VGA buffer or to a serial
port. port.
...@@ -47,7 +47,7 @@ config EARLY_PRINTK_DBGP ...@@ -47,7 +47,7 @@ config EARLY_PRINTK_DBGP
bool "Early printk via EHCI debug port" bool "Early printk via EHCI debug port"
default n default n
depends on EARLY_PRINTK && PCI depends on EARLY_PRINTK && PCI
help ---help---
Write kernel log output directly into the EHCI debug port. Write kernel log output directly into the EHCI debug port.
This is useful for kernel debugging when your machine crashes very This is useful for kernel debugging when your machine crashes very
...@@ -59,14 +59,14 @@ config EARLY_PRINTK_DBGP ...@@ -59,14 +59,14 @@ config EARLY_PRINTK_DBGP
config DEBUG_STACKOVERFLOW config DEBUG_STACKOVERFLOW
bool "Check for stack overflows" bool "Check for stack overflows"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
help ---help---
This option will cause messages to be printed if free stack space This option will cause messages to be printed if free stack space
drops below a certain limit. drops below a certain limit.
config DEBUG_STACK_USAGE config DEBUG_STACK_USAGE
bool "Stack utilization instrumentation" bool "Stack utilization instrumentation"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
help ---help---
Enables the display of the minimum amount of free stack which each Enables the display of the minimum amount of free stack which each
task has ever had available in the sysrq-T and sysrq-P debug output. task has ever had available in the sysrq-T and sysrq-P debug output.
...@@ -75,7 +75,7 @@ config DEBUG_STACK_USAGE ...@@ -75,7 +75,7 @@ config DEBUG_STACK_USAGE
config DEBUG_PAGEALLOC config DEBUG_PAGEALLOC
bool "Debug page memory allocations" bool "Debug page memory allocations"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
help ---help---
Unmap pages from the kernel linear mapping after free_pages(). Unmap pages from the kernel linear mapping after free_pages().
This results in a large slowdown, but helps to find certain types This results in a large slowdown, but helps to find certain types
of memory corruptions. of memory corruptions.
...@@ -83,9 +83,9 @@ config DEBUG_PAGEALLOC ...@@ -83,9 +83,9 @@ config DEBUG_PAGEALLOC
config DEBUG_PER_CPU_MAPS config DEBUG_PER_CPU_MAPS
bool "Debug access to per_cpu maps" bool "Debug access to per_cpu maps"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
depends on X86_SMP depends on SMP
default n default n
help ---help---
Say Y to verify that the per_cpu map being accessed has Say Y to verify that the per_cpu map being accessed has
been setup. Adds a fair amount of code to kernel memory been setup. Adds a fair amount of code to kernel memory
and decreases performance. and decreases performance.
...@@ -96,7 +96,7 @@ config X86_PTDUMP ...@@ -96,7 +96,7 @@ config X86_PTDUMP
bool "Export kernel pagetable layout to userspace via debugfs" bool "Export kernel pagetable layout to userspace via debugfs"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
select DEBUG_FS select DEBUG_FS
help ---help---
Say Y here if you want to show the kernel pagetable layout in a Say Y here if you want to show the kernel pagetable layout in a
debugfs file. This information is only useful for kernel developers debugfs file. This information is only useful for kernel developers
who are working in architecture specific areas of the kernel. who are working in architecture specific areas of the kernel.
...@@ -108,7 +108,7 @@ config DEBUG_RODATA ...@@ -108,7 +108,7 @@ config DEBUG_RODATA
bool "Write protect kernel read-only data structures" bool "Write protect kernel read-only data structures"
default y default y
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
help ---help---
Mark the kernel read-only data as write-protected in the pagetables, Mark the kernel read-only data as write-protected in the pagetables,
in order to catch accidental (and incorrect) writes to such const in order to catch accidental (and incorrect) writes to such const
data. This is recommended so that we can catch kernel bugs sooner. data. This is recommended so that we can catch kernel bugs sooner.
...@@ -117,7 +117,8 @@ config DEBUG_RODATA ...@@ -117,7 +117,8 @@ config DEBUG_RODATA
config DEBUG_RODATA_TEST config DEBUG_RODATA_TEST
bool "Testcase for the DEBUG_RODATA feature" bool "Testcase for the DEBUG_RODATA feature"
depends on DEBUG_RODATA depends on DEBUG_RODATA
help default y
---help---
This option enables a testcase for the DEBUG_RODATA This option enables a testcase for the DEBUG_RODATA
feature as well as for the change_page_attr() infrastructure. feature as well as for the change_page_attr() infrastructure.
If in doubt, say "N" If in doubt, say "N"
...@@ -125,7 +126,7 @@ config DEBUG_RODATA_TEST ...@@ -125,7 +126,7 @@ config DEBUG_RODATA_TEST
config DEBUG_NX_TEST config DEBUG_NX_TEST
tristate "Testcase for the NX non-executable stack feature" tristate "Testcase for the NX non-executable stack feature"
depends on DEBUG_KERNEL && m depends on DEBUG_KERNEL && m
help ---help---
This option enables a testcase for the CPU NX capability This option enables a testcase for the CPU NX capability
and the software setup of this feature. and the software setup of this feature.
If in doubt, say "N" If in doubt, say "N"
...@@ -133,7 +134,7 @@ config DEBUG_NX_TEST ...@@ -133,7 +134,7 @@ config DEBUG_NX_TEST
config 4KSTACKS config 4KSTACKS
bool "Use 4Kb for kernel stacks instead of 8Kb" bool "Use 4Kb for kernel stacks instead of 8Kb"
depends on X86_32 depends on X86_32
help ---help---
If you say Y here the kernel will use a 4Kb stacksize for the If you say Y here the kernel will use a 4Kb stacksize for the
kernel stack attached to each process/thread. This facilitates kernel stack attached to each process/thread. This facilitates
running more threads on a system and also reduces the pressure running more threads on a system and also reduces the pressure
...@@ -144,7 +145,7 @@ config DOUBLEFAULT ...@@ -144,7 +145,7 @@ config DOUBLEFAULT
default y default y
bool "Enable doublefault exception handler" if EMBEDDED bool "Enable doublefault exception handler" if EMBEDDED
depends on X86_32 depends on X86_32
help ---help---
This option allows trapping of rare doublefault exceptions that This option allows trapping of rare doublefault exceptions that
would otherwise cause a system to silently reboot. Disabling this would otherwise cause a system to silently reboot. Disabling this
option saves about 4k and might cause you much additional grey option saves about 4k and might cause you much additional grey
...@@ -154,7 +155,7 @@ config IOMMU_DEBUG ...@@ -154,7 +155,7 @@ config IOMMU_DEBUG
bool "Enable IOMMU debugging" bool "Enable IOMMU debugging"
depends on GART_IOMMU && DEBUG_KERNEL depends on GART_IOMMU && DEBUG_KERNEL
depends on X86_64 depends on X86_64
help ---help---
Force the IOMMU to on even when you have less than 4GB of Force the IOMMU to on even when you have less than 4GB of
memory and add debugging code. On overflow always panic. And memory and add debugging code. On overflow always panic. And
allow to enable IOMMU leak tracing. Can be disabled at boot allow to enable IOMMU leak tracing. Can be disabled at boot
...@@ -170,7 +171,7 @@ config IOMMU_LEAK ...@@ -170,7 +171,7 @@ config IOMMU_LEAK
bool "IOMMU leak tracing" bool "IOMMU leak tracing"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
depends on IOMMU_DEBUG depends on IOMMU_DEBUG
help ---help---
Add a simple leak tracer to the IOMMU code. This is useful when you Add a simple leak tracer to the IOMMU code. This is useful when you
are debugging a buggy device driver that leaks IOMMU mappings. are debugging a buggy device driver that leaks IOMMU mappings.
...@@ -203,25 +204,25 @@ choice ...@@ -203,25 +204,25 @@ choice
config IO_DELAY_0X80 config IO_DELAY_0X80
bool "port 0x80 based port-IO delay [recommended]" bool "port 0x80 based port-IO delay [recommended]"
help ---help---
This is the traditional Linux IO delay used for in/out_p. This is the traditional Linux IO delay used for in/out_p.
It is the most tested hence safest selection here. It is the most tested hence safest selection here.
config IO_DELAY_0XED config IO_DELAY_0XED
bool "port 0xed based port-IO delay" bool "port 0xed based port-IO delay"
help ---help---
Use port 0xed as the IO delay. This frees up port 0x80 which is Use port 0xed as the IO delay. This frees up port 0x80 which is
often used as a hardware-debug port. often used as a hardware-debug port.
config IO_DELAY_UDELAY config IO_DELAY_UDELAY
bool "udelay based port-IO delay" bool "udelay based port-IO delay"
help ---help---
Use udelay(2) as the IO delay method. This provides the delay Use udelay(2) as the IO delay method. This provides the delay
while not having any side-effect on the IO port space. while not having any side-effect on the IO port space.
config IO_DELAY_NONE config IO_DELAY_NONE
bool "no port-IO delay" bool "no port-IO delay"
help ---help---
No port-IO delay. Will break on old boxes that require port-IO No port-IO delay. Will break on old boxes that require port-IO
delay for certain operations. Should work on most new machines. delay for certain operations. Should work on most new machines.
...@@ -255,18 +256,18 @@ config DEBUG_BOOT_PARAMS ...@@ -255,18 +256,18 @@ config DEBUG_BOOT_PARAMS
bool "Debug boot parameters" bool "Debug boot parameters"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
depends on DEBUG_FS depends on DEBUG_FS
help ---help---
This option will cause struct boot_params to be exported via debugfs. This option will cause struct boot_params to be exported via debugfs.
config CPA_DEBUG config CPA_DEBUG
bool "CPA self-test code" bool "CPA self-test code"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
help ---help---
Do change_page_attr() self-tests every 30 seconds. Do change_page_attr() self-tests every 30 seconds.
config OPTIMIZE_INLINING config OPTIMIZE_INLINING
bool "Allow gcc to uninline functions marked 'inline'" bool "Allow gcc to uninline functions marked 'inline'"
help ---help---
This option determines if the kernel forces gcc to inline the functions This option determines if the kernel forces gcc to inline the functions
developers have marked 'inline'. Doing so takes away freedom from gcc to developers have marked 'inline'. Doing so takes away freedom from gcc to
do what it thinks is best, which is desirable for the gcc 3.x series of do what it thinks is best, which is desirable for the gcc 3.x series of
...@@ -279,4 +280,3 @@ config OPTIMIZE_INLINING ...@@ -279,4 +280,3 @@ config OPTIMIZE_INLINING
If unsure, say N. If unsure, say N.
endmenu endmenu
...@@ -70,14 +70,17 @@ else ...@@ -70,14 +70,17 @@ else
# this works around some issues with generating unwind tables in older gccs # this works around some issues with generating unwind tables in older gccs
# newer gccs do it by default # newer gccs do it by default
KBUILD_CFLAGS += -maccumulate-outgoing-args KBUILD_CFLAGS += -maccumulate-outgoing-args
endif
stackp := $(CONFIG_SHELL) $(srctree)/scripts/gcc-x86_64-has-stack-protector.sh ifdef CONFIG_CC_STACKPROTECTOR
stackp-$(CONFIG_CC_STACKPROTECTOR) := $(shell $(stackp) \ cc_has_sp := $(srctree)/scripts/gcc-x86_$(BITS)-has-stack-protector.sh
"$(CC)" -fstack-protector ) ifeq ($(shell $(CONFIG_SHELL) $(cc_has_sp) $(CC)),y)
stackp-$(CONFIG_CC_STACKPROTECTOR_ALL) += $(shell $(stackp) \ stackp-y := -fstack-protector
"$(CC)" -fstack-protector-all ) stackp-$(CONFIG_CC_STACKPROTECTOR_ALL) += -fstack-protector-all
KBUILD_CFLAGS += $(stackp-y)
KBUILD_CFLAGS += $(stackp-y) else
$(warning stack protector enabled but no compiler support)
endif
endif endif
# Stackpointer is addressed different for 32 bit and 64 bit x86 # Stackpointer is addressed different for 32 bit and 64 bit x86
...@@ -102,29 +105,6 @@ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables ...@@ -102,29 +105,6 @@ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
# prevent gcc from generating any FP code by mistake # prevent gcc from generating any FP code by mistake
KBUILD_CFLAGS += $(call cc-option,-mno-sse -mno-mmx -mno-sse2 -mno-3dnow,) KBUILD_CFLAGS += $(call cc-option,-mno-sse -mno-mmx -mno-sse2 -mno-3dnow,)
###
# Sub architecture support
# fcore-y is linked before mcore-y files.
# Default subarch .c files
mcore-y := arch/x86/mach-default/
# Voyager subarch support
mflags-$(CONFIG_X86_VOYAGER) := -Iarch/x86/include/asm/mach-voyager
mcore-$(CONFIG_X86_VOYAGER) := arch/x86/mach-voyager/
# generic subarchitecture
mflags-$(CONFIG_X86_GENERICARCH):= -Iarch/x86/include/asm/mach-generic
fcore-$(CONFIG_X86_GENERICARCH) += arch/x86/mach-generic/
mcore-$(CONFIG_X86_GENERICARCH) := arch/x86/mach-default/
# default subarch .h files
mflags-y += -Iarch/x86/include/asm/mach-default
# 64 bit does not support subarch support - clear sub arch variables
fcore-$(CONFIG_X86_64) :=
mcore-$(CONFIG_X86_64) :=
KBUILD_CFLAGS += $(mflags-y) KBUILD_CFLAGS += $(mflags-y)
KBUILD_AFLAGS += $(mflags-y) KBUILD_AFLAGS += $(mflags-y)
...@@ -150,9 +130,6 @@ core-$(CONFIG_LGUEST_GUEST) += arch/x86/lguest/ ...@@ -150,9 +130,6 @@ core-$(CONFIG_LGUEST_GUEST) += arch/x86/lguest/
core-y += arch/x86/kernel/ core-y += arch/x86/kernel/
core-y += arch/x86/mm/ core-y += arch/x86/mm/
# Remaining sub architecture files
core-y += $(mcore-y)
core-y += arch/x86/crypto/ core-y += arch/x86/crypto/
core-y += arch/x86/vdso/ core-y += arch/x86/vdso/
core-$(CONFIG_IA32_EMULATION) += arch/x86/ia32/ core-$(CONFIG_IA32_EMULATION) += arch/x86/ia32/
......
...@@ -32,7 +32,6 @@ setup-y += a20.o cmdline.o copy.o cpu.o cpucheck.o edd.o ...@@ -32,7 +32,6 @@ setup-y += a20.o cmdline.o copy.o cpu.o cpucheck.o edd.o
setup-y += header.o main.o mca.o memory.o pm.o pmjump.o setup-y += header.o main.o mca.o memory.o pm.o pmjump.o
setup-y += printf.o string.o tty.o video.o video-mode.o version.o setup-y += printf.o string.o tty.o video.o video-mode.o version.o
setup-$(CONFIG_X86_APM_BOOT) += apm.o setup-$(CONFIG_X86_APM_BOOT) += apm.o
setup-$(CONFIG_X86_VOYAGER) += voyager.o
# The link order of the video-*.o modules can matter. In particular, # The link order of the video-*.o modules can matter. In particular,
# video-vga.o *must* be listed first, followed by video-vesa.o. # video-vga.o *must* be listed first, followed by video-vesa.o.
......
...@@ -2,6 +2,7 @@ ...@@ -2,6 +2,7 @@
* *
* Copyright (C) 1991, 1992 Linus Torvalds * Copyright (C) 1991, 1992 Linus Torvalds
* Copyright 2007-2008 rPath, Inc. - All Rights Reserved * Copyright 2007-2008 rPath, Inc. - All Rights Reserved
* Copyright 2009 Intel Corporation
* *
* This file is part of the Linux kernel, and is made available under * This file is part of the Linux kernel, and is made available under
* the terms of the GNU General Public License version 2. * the terms of the GNU General Public License version 2.
...@@ -15,16 +16,23 @@ ...@@ -15,16 +16,23 @@
#include "boot.h" #include "boot.h"
#define MAX_8042_LOOPS 100000 #define MAX_8042_LOOPS 100000
#define MAX_8042_FF 32
static int empty_8042(void) static int empty_8042(void)
{ {
u8 status; u8 status;
int loops = MAX_8042_LOOPS; int loops = MAX_8042_LOOPS;
int ffs = MAX_8042_FF;
while (loops--) { while (loops--) {
io_delay(); io_delay();
status = inb(0x64); status = inb(0x64);
if (status == 0xff) {
/* FF is a plausible, but very unlikely status */
if (!--ffs)
return -1; /* Assume no KBC present */
}
if (status & 1) { if (status & 1) {
/* Read and discard input data */ /* Read and discard input data */
io_delay(); io_delay();
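Read down the new (right-hand) side, the hunk above yields the following flush loop, consolidated here for readability. Types and helpers (u8, inb(), io_delay(), the MAX_8042_* constants) come from boot.h as in the file itself; the tail of the loop is unchanged by the patch and elided:

static int empty_8042(void)
{
        u8 status;
        int loops = MAX_8042_LOOPS;
        int ffs = MAX_8042_FF;

        while (loops--) {
                io_delay();

                status = inb(0x64);
                if (status == 0xff) {
                        /* FF is a plausible, but very unlikely status */
                        if (!--ffs)
                                return -1;      /* Assume no KBC present */
                }
                if (status & 1) {
                        /* Read and discard input data */
                        io_delay();
                        /* ... draining of port 0x60 and the rest of the
                         * loop are unchanged and not shown in the hunk */
                }
        }

        return -1;      /* timed out (unchanged tail, assumed here) */
}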
...@@ -118,44 +126,37 @@ static void enable_a20_fast(void) ...@@ -118,44 +126,37 @@ static void enable_a20_fast(void)
int enable_a20(void) int enable_a20(void)
{ {
#if defined(CONFIG_X86_ELAN)
/* Elan croaks if we try to touch the KBC */
enable_a20_fast();
while (!a20_test_long())
;
return 0;
#elif defined(CONFIG_X86_VOYAGER)
/* On Voyager, a20_test() is unsafe? */
enable_a20_kbc();
return 0;
#else
int loops = A20_ENABLE_LOOPS; int loops = A20_ENABLE_LOOPS;
while (loops--) { int kbc_err;
/* First, check to see if A20 is already enabled
(legacy free, etc.) */ while (loops--) {
if (a20_test_short()) /* First, check to see if A20 is already enabled
return 0; (legacy free, etc.) */
if (a20_test_short())
/* Next, try the BIOS (INT 0x15, AX=0x2401) */ return 0;
enable_a20_bios();
if (a20_test_short()) /* Next, try the BIOS (INT 0x15, AX=0x2401) */
return 0; enable_a20_bios();
if (a20_test_short())
/* Try enabling A20 through the keyboard controller */ return 0;
empty_8042();
if (a20_test_short()) /* Try enabling A20 through the keyboard controller */
return 0; /* BIOS worked, but with delayed reaction */ kbc_err = empty_8042();
enable_a20_kbc(); if (a20_test_short())
if (a20_test_long()) return 0; /* BIOS worked, but with delayed reaction */
return 0;
if (!kbc_err) {
/* Finally, try enabling the "fast A20 gate" */ enable_a20_kbc();
enable_a20_fast(); if (a20_test_long())
if (a20_test_long()) return 0;
return 0; }
}
/* Finally, try enabling the "fast A20 gate" */
return -1; enable_a20_fast();
#endif if (a20_test_long())
return 0;
}
return -1;
} }
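Likewise, the new side of the enable_a20() hunk consolidates to the code below. The helpers and a20_test_*() are the existing ones declared in boot.h; nothing here goes beyond what the hunk shows, it is only laid out in one column:

int enable_a20(void)
{
        int loops = A20_ENABLE_LOOPS;
        int kbc_err;

        while (loops--) {
                /* First, check to see if A20 is already enabled
                   (legacy free, etc.) */
                if (a20_test_short())
                        return 0;

                /* Next, try the BIOS (INT 0x15, AX=0x2401) */
                enable_a20_bios();
                if (a20_test_short())
                        return 0;

                /* Try enabling A20 through the keyboard controller */
                kbc_err = empty_8042();

                if (a20_test_short())
                        return 0; /* BIOS worked, but with delayed reaction */

                if (!kbc_err) {
                        enable_a20_kbc();
                        if (a20_test_long())
                                return 0;
                }

                /* Finally, try enabling the "fast A20 gate" */
                enable_a20_fast();
                if (a20_test_long())
                        return 0;
        }

        return -1;
}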
...@@ -302,9 +302,6 @@ void probe_cards(int unsafe); ...@@ -302,9 +302,6 @@ void probe_cards(int unsafe);
/* video-vesa.c */ /* video-vesa.c */
void vesa_store_edid(void); void vesa_store_edid(void);
/* voyager.c */
int query_voyager(void);
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#endif /* BOOT_BOOT_H */ #endif /* BOOT_BOOT_H */
...@@ -4,7 +4,7 @@ ...@@ -4,7 +4,7 @@
# create a compressed vmlinux image from the original vmlinux # create a compressed vmlinux image from the original vmlinux
# #
targets := vmlinux vmlinux.bin vmlinux.bin.gz head_$(BITS).o misc.o piggy.o targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma head_$(BITS).o misc.o piggy.o
KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2 KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2
KBUILD_CFLAGS += -fno-strict-aliasing -fPIC KBUILD_CFLAGS += -fno-strict-aliasing -fPIC
...@@ -47,18 +47,35 @@ ifeq ($(CONFIG_X86_32),y) ...@@ -47,18 +47,35 @@ ifeq ($(CONFIG_X86_32),y)
ifdef CONFIG_RELOCATABLE ifdef CONFIG_RELOCATABLE
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE
$(call if_changed,gzip) $(call if_changed,gzip)
$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin.all FORCE
$(call if_changed,bzip2)
$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin.all FORCE
$(call if_changed,lzma)
else else
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
$(call if_changed,gzip) $(call if_changed,gzip)
$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin FORCE
$(call if_changed,bzip2)
$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin FORCE
$(call if_changed,lzma)
endif endif
LDFLAGS_piggy.o := -r --format binary --oformat elf32-i386 -T LDFLAGS_piggy.o := -r --format binary --oformat elf32-i386 -T
else else
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
$(call if_changed,gzip) $(call if_changed,gzip)
$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin FORCE
$(call if_changed,bzip2)
$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin FORCE
$(call if_changed,lzma)
LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T
endif endif
$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.gz FORCE suffix_$(CONFIG_KERNEL_GZIP) = gz
suffix_$(CONFIG_KERNEL_BZIP2) = bz2
suffix_$(CONFIG_KERNEL_LZMA) = lzma
$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.$(suffix_y) FORCE
$(call if_changed,ld) $(call if_changed,ld)
...@@ -25,14 +25,12 @@ ...@@ -25,14 +25,12 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/segment.h> #include <asm/segment.h>
#include <asm/page.h> #include <asm/page_types.h>
#include <asm/boot.h> #include <asm/boot.h>
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
.section ".text.head","ax",@progbits .section ".text.head","ax",@progbits
.globl startup_32 ENTRY(startup_32)
startup_32:
cld cld
/* test KEEP_SEGMENTS flag to see if the bootloader is asking /* test KEEP_SEGMENTS flag to see if the bootloader is asking
* us to not reload segments */ * us to not reload segments */
...@@ -113,6 +111,8 @@ startup_32: ...@@ -113,6 +111,8 @@ startup_32:
*/ */
leal relocated(%ebx), %eax leal relocated(%ebx), %eax
jmp *%eax jmp *%eax
ENDPROC(startup_32)
.section ".text" .section ".text"
relocated: relocated:
......
...@@ -26,8 +26,8 @@ ...@@ -26,8 +26,8 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/segment.h> #include <asm/segment.h>
#include <asm/pgtable.h> #include <asm/pgtable_types.h>
#include <asm/page.h> #include <asm/page_types.h>
#include <asm/boot.h> #include <asm/boot.h>
#include <asm/msr.h> #include <asm/msr.h>
#include <asm/processor-flags.h> #include <asm/processor-flags.h>
...@@ -35,9 +35,7 @@ ...@@ -35,9 +35,7 @@
.section ".text.head" .section ".text.head"
.code32 .code32
.globl startup_32 ENTRY(startup_32)
startup_32:
cld cld
/* test KEEP_SEGMENTS flag to see if the bootloader is asking /* test KEEP_SEGMENTS flag to see if the bootloader is asking
* us to not reload segments */ * us to not reload segments */
...@@ -176,6 +174,7 @@ startup_32: ...@@ -176,6 +174,7 @@ startup_32:
/* Jump from 32bit compatibility mode into 64bit mode. */ /* Jump from 32bit compatibility mode into 64bit mode. */
lret lret
ENDPROC(startup_32)
no_longmode: no_longmode:
/* This isn't an x86-64 CPU so hang */ /* This isn't an x86-64 CPU so hang */
...@@ -295,7 +294,6 @@ relocated: ...@@ -295,7 +294,6 @@ relocated:
call decompress_kernel call decompress_kernel
popq %rsi popq %rsi
/* /*
* Jump to the decompressed kernel. * Jump to the decompressed kernel.
*/ */
......
...@@ -116,71 +116,13 @@ ...@@ -116,71 +116,13 @@
/* /*
* gzip declarations * gzip declarations
*/ */
#define OF(args) args
#define STATIC static #define STATIC static
#undef memset #undef memset
#undef memcpy #undef memcpy
#define memzero(s, n) memset((s), 0, (n)) #define memzero(s, n) memset((s), 0, (n))
typedef unsigned char uch;
typedef unsigned short ush;
typedef unsigned long ulg;
/*
* Window size must be at least 32k, and a power of two.
* We don't actually have a window just a huge output buffer,
* so we report a 2G window size, as that should always be
* larger than our output buffer:
*/
#define WSIZE 0x80000000
/* Input buffer: */
static unsigned char *inbuf;
/* Sliding window buffer (and final output buffer): */
static unsigned char *window;
/* Valid bytes in inbuf: */
static unsigned insize;
/* Index of next byte to be processed in inbuf: */
static unsigned inptr;
/* Bytes in output buffer: */
static unsigned outcnt;
/* gzip flag byte */
#define ASCII_FLAG 0x01 /* bit 0 set: file probably ASCII text */
#define CONTINUATION 0x02 /* bit 1 set: continuation of multi-part gz file */
#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
#define ORIG_NAME 0x08 /* bit 3 set: original file name present */
#define COMMENT 0x10 /* bit 4 set: file comment present */
#define ENCRYPTED 0x20 /* bit 5 set: file is encrypted */
#define RESERVED 0xC0 /* bit 6, 7: reserved */
#define get_byte() (inptr < insize ? inbuf[inptr++] : fill_inbuf())
/* Diagnostic functions */
#ifdef DEBUG
# define Assert(cond, msg) do { if (!(cond)) error(msg); } while (0)
# define Trace(x) do { fprintf x; } while (0)
# define Tracev(x) do { if (verbose) fprintf x ; } while (0)
# define Tracevv(x) do { if (verbose > 1) fprintf x ; } while (0)
# define Tracec(c, x) do { if (verbose && (c)) fprintf x ; } while (0)
# define Tracecv(c, x) do { if (verbose > 1 && (c)) fprintf x ; } while (0)
#else
# define Assert(cond, msg)
# define Trace(x)
# define Tracev(x)
# define Tracevv(x)
# define Tracec(c, x)
# define Tracecv(c, x)
#endif
static int fill_inbuf(void);
static void flush_window(void);
static void error(char *m); static void error(char *m);
/* /*
...@@ -189,13 +131,8 @@ static void error(char *m); ...@@ -189,13 +131,8 @@ static void error(char *m);
static struct boot_params *real_mode; /* Pointer to real-mode data */ static struct boot_params *real_mode; /* Pointer to real-mode data */
static int quiet; static int quiet;
extern unsigned char input_data[];
extern int input_len;
static long bytes_out;
static void *memset(void *s, int c, unsigned n); static void *memset(void *s, int c, unsigned n);
static void *memcpy(void *dest, const void *src, unsigned n); void *memcpy(void *dest, const void *src, unsigned n);
static void __putstr(int, const char *); static void __putstr(int, const char *);
#define putstr(__x) __putstr(0, __x) #define putstr(__x) __putstr(0, __x)
...@@ -213,7 +150,17 @@ static char *vidmem; ...@@ -213,7 +150,17 @@ static char *vidmem;
static int vidport; static int vidport;
static int lines, cols; static int lines, cols;
#include "../../../../lib/inflate.c" #ifdef CONFIG_KERNEL_GZIP
#include "../../../../lib/decompress_inflate.c"
#endif
#ifdef CONFIG_KERNEL_BZIP2
#include "../../../../lib/decompress_bunzip2.c"
#endif
#ifdef CONFIG_KERNEL_LZMA
#include "../../../../lib/decompress_unlzma.c"
#endif
static void scroll(void) static void scroll(void)
{ {
...@@ -282,7 +229,7 @@ static void *memset(void *s, int c, unsigned n) ...@@ -282,7 +229,7 @@ static void *memset(void *s, int c, unsigned n)
return s; return s;
} }
static void *memcpy(void *dest, const void *src, unsigned n) void *memcpy(void *dest, const void *src, unsigned n)
{ {
int i; int i;
const char *s = src; const char *s = src;
...@@ -293,38 +240,6 @@ static void *memcpy(void *dest, const void *src, unsigned n) ...@@ -293,38 +240,6 @@ static void *memcpy(void *dest, const void *src, unsigned n)
return dest; return dest;
} }
/* ===========================================================================
* Fill the input buffer. This is called only when the buffer is empty
* and at least one byte is really needed.
*/
static int fill_inbuf(void)
{
error("ran out of input data");
return 0;
}
/* ===========================================================================
* Write the output window window[0..outcnt-1] and update crc and bytes_out.
* (Used for the decompressed data only.)
*/
static void flush_window(void)
{
/* With my window equal to my output buffer
* I only need to compute the crc here.
*/
unsigned long c = crc; /* temporary variable */
unsigned n;
unsigned char *in, ch;
in = window;
for (n = 0; n < outcnt; n++) {
ch = *in++;
c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8);
}
crc = c;
bytes_out += (unsigned long)outcnt;
outcnt = 0;
}
static void error(char *x) static void error(char *x)
{ {
...@@ -407,12 +322,8 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap, ...@@ -407,12 +322,8 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap,
lines = real_mode->screen_info.orig_video_lines; lines = real_mode->screen_info.orig_video_lines;
cols = real_mode->screen_info.orig_video_cols; cols = real_mode->screen_info.orig_video_cols;
window = output; /* Output buffer (Normally at 1M) */
free_mem_ptr = heap; /* Heap */ free_mem_ptr = heap; /* Heap */
free_mem_end_ptr = heap + BOOT_HEAP_SIZE; free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
inbuf = input_data; /* Input buffer */
insize = input_len;
inptr = 0;
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
if ((unsigned long)output & (__KERNEL_ALIGN - 1)) if ((unsigned long)output & (__KERNEL_ALIGN - 1))
...@@ -430,10 +341,9 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap, ...@@ -430,10 +341,9 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap,
#endif #endif
#endif #endif
makecrc();
if (!quiet) if (!quiet)
putstr("\nDecompressing Linux... "); putstr("\nDecompressing Linux... ");
gunzip(); decompress(input_data, input_len, NULL, NULL, output, NULL, error);
parse_elf(output); parse_elf(output);
if (!quiet) if (!quiet)
putstr("done.\nBooting the kernel.\n"); putstr("done.\nBooting the kernel.\n");
......
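The gunzip()/flush_window() plumbing removed above is replaced by the generic decompressors pulled in via lib/decompress_*.c. A sketch of the callback-style interface they share (signature written from memory, so treat the names and types as assumptions rather than a quote from this commit):

/* Assumed shape of the generic decompressor interface (see
 * include/linux/decompress/generic.h in the kernel tree). */
typedef int (*decompress_fn)(unsigned char *inbuf, int len,
                             int (*fill)(void *, unsigned int),
                             int (*flush)(void *, unsigned int),
                             unsigned char *outbuf,
                             int *posp,
                             void (*error)(char *x));

/* The boot stub already has the whole compressed image in memory and a
 * single output buffer, so it passes NULL for fill, flush and posp and
 * only supplies an error callback, as in the call shown above:
 *
 *      decompress(input_data, input_len, NULL, NULL, output, NULL, error);
 */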
...@@ -8,6 +8,8 @@ ...@@ -8,6 +8,8 @@
* *
* ----------------------------------------------------------------------- */ * ----------------------------------------------------------------------- */
#include <linux/linkage.h>
/* /*
* Memory copy routines * Memory copy routines
*/ */
...@@ -15,9 +17,7 @@ ...@@ -15,9 +17,7 @@
.code16gcc .code16gcc
.text .text
.globl memcpy GLOBAL(memcpy)
.type memcpy, @function
memcpy:
pushw %si pushw %si
pushw %di pushw %di
movw %ax, %di movw %ax, %di
...@@ -31,11 +31,9 @@ memcpy: ...@@ -31,11 +31,9 @@ memcpy:
popw %di popw %di
popw %si popw %si
ret ret
.size memcpy, .-memcpy ENDPROC(memcpy)
.globl memset GLOBAL(memset)
.type memset, @function
memset:
pushw %di pushw %di
movw %ax, %di movw %ax, %di
movzbl %dl, %eax movzbl %dl, %eax
...@@ -48,52 +46,42 @@ memset: ...@@ -48,52 +46,42 @@ memset:
rep; stosb rep; stosb
popw %di popw %di
ret ret
.size memset, .-memset ENDPROC(memset)
.globl copy_from_fs GLOBAL(copy_from_fs)
.type copy_from_fs, @function
copy_from_fs:
pushw %ds pushw %ds
pushw %fs pushw %fs
popw %ds popw %ds
call memcpy call memcpy
popw %ds popw %ds
ret ret
.size copy_from_fs, .-copy_from_fs ENDPROC(copy_from_fs)
.globl copy_to_fs GLOBAL(copy_to_fs)
.type copy_to_fs, @function
copy_to_fs:
pushw %es pushw %es
pushw %fs pushw %fs
popw %es popw %es
call memcpy call memcpy
popw %es popw %es
ret ret
.size copy_to_fs, .-copy_to_fs ENDPROC(copy_to_fs)
#if 0 /* Not currently used, but can be enabled as needed */ #if 0 /* Not currently used, but can be enabled as needed */
GLOBAL(copy_from_gs)
.globl copy_from_gs
.type copy_from_gs, @function
copy_from_gs:
pushw %ds pushw %ds
pushw %gs pushw %gs
popw %ds popw %ds
call memcpy call memcpy
popw %ds popw %ds
ret ret
.size copy_from_gs, .-copy_from_gs ENDPROC(copy_from_gs)
.globl copy_to_gs
.type copy_to_gs, @function GLOBAL(copy_to_gs)
copy_to_gs:
pushw %es pushw %es
pushw %gs pushw %gs
popw %es popw %es
call memcpy call memcpy
popw %es popw %es
ret ret
.size copy_to_gs, .-copy_to_gs ENDPROC(copy_to_gs)
#endif #endif
...@@ -19,7 +19,7 @@ ...@@ -19,7 +19,7 @@
#include <linux/utsrelease.h> #include <linux/utsrelease.h>
#include <asm/boot.h> #include <asm/boot.h>
#include <asm/e820.h> #include <asm/e820.h>
#include <asm/page.h> #include <asm/page_types.h>
#include <asm/setup.h> #include <asm/setup.h>
#include "boot.h" #include "boot.h"
#include "offsets.h" #include "offsets.h"
......
...@@ -149,11 +149,6 @@ void main(void) ...@@ -149,11 +149,6 @@ void main(void)
/* Query MCA information */ /* Query MCA information */
query_mca(); query_mca();
/* Voyager */
#ifdef CONFIG_X86_VOYAGER
query_voyager();
#endif
/* Query Intel SpeedStep (IST) information */ /* Query Intel SpeedStep (IST) information */
query_ist(); query_ist();
......
...@@ -15,18 +15,15 @@ ...@@ -15,18 +15,15 @@
#include <asm/boot.h> #include <asm/boot.h>
#include <asm/processor-flags.h> #include <asm/processor-flags.h>
#include <asm/segment.h> #include <asm/segment.h>
#include <linux/linkage.h>
.text .text
.globl protected_mode_jump
.type protected_mode_jump, @function
.code16 .code16
/* /*
* void protected_mode_jump(u32 entrypoint, u32 bootparams); * void protected_mode_jump(u32 entrypoint, u32 bootparams);
*/ */
protected_mode_jump: GLOBAL(protected_mode_jump)
movl %edx, %esi # Pointer to boot_params table movl %edx, %esi # Pointer to boot_params table
xorl %ebx, %ebx xorl %ebx, %ebx
...@@ -47,12 +44,10 @@ protected_mode_jump: ...@@ -47,12 +44,10 @@ protected_mode_jump:
.byte 0x66, 0xea # ljmpl opcode .byte 0x66, 0xea # ljmpl opcode
2: .long in_pm32 # offset 2: .long in_pm32 # offset
.word __BOOT_CS # segment .word __BOOT_CS # segment
ENDPROC(protected_mode_jump)
.size protected_mode_jump, .-protected_mode_jump
.code32 .code32
.type in_pm32, @function GLOBAL(in_pm32)
in_pm32:
# Set up data segments for flat 32-bit mode # Set up data segments for flat 32-bit mode
movl %ecx, %ds movl %ecx, %ds
movl %ecx, %es movl %ecx, %es
...@@ -78,5 +73,4 @@ in_pm32: ...@@ -78,5 +73,4 @@ in_pm32:
lldt %cx lldt %cx
jmpl *%eax # Jump to the 32-bit entrypoint jmpl *%eax # Jump to the 32-bit entrypoint
ENDPROC(in_pm32)
.size in_pm32, .-in_pm32
/* -*- linux-c -*- ------------------------------------------------------- *
*
* Copyright (C) 1991, 1992 Linus Torvalds
* Copyright 2007 rPath, Inc. - All Rights Reserved
*
* This file is part of the Linux kernel, and is made available under
* the terms of the GNU General Public License version 2.
*
* ----------------------------------------------------------------------- */
/*
* Get the Voyager config information
*/
#include "boot.h"
int query_voyager(void)
{
u8 err;
u16 es, di;
/* Abuse the apm_bios_info area for this */
u8 *data_ptr = (u8 *)&boot_params.apm_bios_info;
data_ptr[0] = 0xff; /* Flag on config not found(?) */
asm("pushw %%es ; "
"int $0x15 ; "
"setc %0 ; "
"movw %%es, %1 ; "
"popw %%es"
: "=q" (err), "=r" (es), "=D" (di)
: "a" (0xffc0));
if (err)
return -1; /* Not Voyager */
set_fs(es);
copy_from_fs(data_ptr, di, 7); /* Table is 7 bytes apparently */
return 0;
}
...@@ -55,7 +55,7 @@ static inline void aout_dump_thread(struct pt_regs *regs, struct user *dump) ...@@ -55,7 +55,7 @@ static inline void aout_dump_thread(struct pt_regs *regs, struct user *dump)
dump->regs.ds = (u16)regs->ds; dump->regs.ds = (u16)regs->ds;
dump->regs.es = (u16)regs->es; dump->regs.es = (u16)regs->es;
dump->regs.fs = (u16)regs->fs; dump->regs.fs = (u16)regs->fs;
savesegment(gs, dump->regs.gs); dump->regs.gs = get_user_gs(regs);
dump->regs.orig_ax = regs->orig_ax; dump->regs.orig_ax = regs->orig_ax;
dump->regs.ip = regs->ip; dump->regs.ip = regs->ip;
dump->regs.cs = (u16)regs->cs; dump->regs.cs = (u16)regs->cs;
......
...@@ -102,9 +102,6 @@ static inline void disable_acpi(void) ...@@ -102,9 +102,6 @@ static inline void disable_acpi(void)
acpi_noirq = 1; acpi_noirq = 1;
} }
/* Fixmap pages to reserve for ACPI boot-time tables (see fixmap.h) */
#define FIX_ACPI_PAGES 4
extern int acpi_gsi_to_irq(u32 gsi, unsigned int *irq); extern int acpi_gsi_to_irq(u32 gsi, unsigned int *irq);
static inline void acpi_noirq_set(void) { acpi_noirq = 1; } static inline void acpi_noirq_set(void) { acpi_noirq = 1; }
......