Commit 3de352bb authored by Ingo Molnar

Merge branch 'x86/mpparse' into x86/devel

Conflicts:

	arch/x86/Kconfig
	arch/x86/kernel/io_apic_32.c
	arch/x86/kernel/setup_64.c
	arch/x86/mm/init_32.c
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parents 1b8ba39a 9340e1cc
@@ -610,6 +610,29 @@ and is between 256 and 4096 characters. It is defined in the file
See drivers/char/README.epca and
Documentation/digiepca.txt.
disable_mtrr_cleanup [X86]
enable_mtrr_cleanup [X86]
The kernel tries to adjust the MTRR layout from continuous
to discrete, so that the X server driver can add a WB
entry later. These parameters enable or disable that
adjustment.
mtrr_chunk_size=nn[KMG] [X86]
Used for MTRR cleanup. It is the largest continuous chunk
that could hold holes (aka UC entries).
mtrr_gran_size=nn[KMG] [X86]
Used for MTRR cleanup. It is the granularity of an MTRR block.
Default is 1.
A large value can prevent small-alignment regions from
using up MTRRs.
mtrr_spare_reg_nr=n [X86]
Format: <integer>
Range: 0-7 : spare reg number
Default : 1
Used for MTRR cleanup. It is the number of spare MTRR
entries to keep free. Set to 2 or more if your graphics
card needs more.
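As a usage illustration only (the values below are arbitrary examples, not recommendations), these options are passed on the kernel command line by the boot loader, e.g.:
    enable_mtrr_cleanup mtrr_chunk_size=256M mtrr_gran_size=64M mtrr_spare_reg_nr=2
This would request the cleanup with 256MB chunks at 64MB granularity while keeping two MTRRs spare for later use.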
disable_mtrr_trim [X86, Intel and AMD only]
By default the kernel will trim any uncacheable
memory out of your available memory pool based on
......
@@ -230,6 +230,27 @@ config SMP
If you don't know what to do here, say N.
config X86_FIND_SMP_CONFIG
def_bool y
depends on X86_MPPARSE || X86_VOYAGER || X86_VISWS
depends on X86_32
if ACPI
config X86_MPPARSE
def_bool y
bool "Enable MPS table"
depends on X86_LOCAL_APIC && !X86_VISWS
help
For old SMP systems that do not have proper ACPI support. On newer
systems (especially with 64-bit CPUs) that have ACPI support, the MADT and DSDT will override it.
endif
if !ACPI
config X86_MPPARSE
def_bool y
depends on X86_LOCAL_APIC && !X86_VISWS
endif
choice
prompt "Subarchitecture Type"
default X86_PC
@@ -261,36 +282,6 @@ config X86_VOYAGER
If you do not specifically know you have a Voyager based machine,
say N here, otherwise the kernel you build will not be bootable.
config X86_NUMAQ
bool "NUMAQ (IBM/Sequent)"
depends on SMP && X86_32 && PCI
select NUMA
help
This option is used for getting Linux to run on a (IBM/Sequent) NUMA
multiquad box. This changes the way that processors are bootstrapped,
and uses Clustered Logical APIC addressing mode instead of Flat Logical.
You will need a new lynxer.elf file to flash your firmware with - send
email to <Martin.Bligh@us.ibm.com>.
config X86_SUMMIT
bool "Summit/EXA (IBM x440)"
depends on X86_32 && SMP
help
This option is needed for IBM systems that use the Summit/EXA chipset.
In particular, it is needed for the x440.
If you don't have one of these computers, you should say N here.
If you want to build a NUMA kernel, you must select ACPI.
config X86_BIGSMP
bool "Support for other sub-arch SMP systems with more than 8 CPUs"
depends on X86_32 && SMP
help
This option is needed for the systems that have more than 8 CPUs
and if the system is not of any sub-arch type above.
If you don't have such a system, you should say N here.
config X86_VISWS
bool "SGI 320/540 (Visual Workstation)"
depends on X86_32 && !PCI
@@ -304,12 +295,33 @@ config X86_VISWS
and vice versa. See <file:Documentation/sgi-visws.txt> for details.
config X86_GENERICARCH
bool "Generic architecture (Summit, bigsmp, ES7000, default)"
bool "Generic architecture"
depends on X86_32
help
This option compiles in the Summit, bigsmp, ES7000, default subarchitectures.
It is intended for a generic binary kernel.
If you want a NUMA kernel, select ACPI. We need SRAT for NUMA.
This option compiles in the NUMAQ, Summit, bigsmp, ES7000, default
subarchitectures. It is intended for a generic binary kernel. If you
select them all, the kernel will probe them one by one and fall back
to the default.
if X86_GENERICARCH
config X86_NUMAQ
bool "NUMAQ (IBM/Sequent)"
depends on SMP && X86_32 && PCI && X86_MPPARSE
select NUMA
help
This option is used for getting Linux to run on a NUMAQ (IBM/Sequent)
NUMA multiquad box. This changes the way that processors are
bootstrapped, and uses Clustered Logical APIC addressing mode instead
of Flat Logical. You will need a new lynxer.elf file to flash your
firmware with - send email to <Martin.Bligh@us.ibm.com>.
config X86_SUMMIT
bool "Summit/EXA (IBM x440)"
depends on X86_32 && SMP
help
This option is needed for IBM systems that use the Summit/EXA chipset.
In particular, it is needed for the x440.
config X86_ES7000
bool "Support for Unisys ES7000 IA32 series"
@@ -317,8 +329,15 @@ config X86_ES7000
help
Support for Unisys ES7000 systems. Say 'Y' here if this kernel is
supposed to run on an IA32-based Unisys ES7000 system.
Only choose this option if you have such a system, otherwise you
should say N here.
config X86_BIGSMP
bool "Support for big SMP systems with more than 8 CPUs"
depends on X86_32 && SMP
help
This option is needed for systems that have more than 8 CPUs
and are not covered by one of the sub-arch options above.
endif
config X86_RDC321X
bool "RDC R-321x SoC"
@@ -432,7 +451,7 @@ config MEMTEST
config ACPI_SRAT
def_bool y
depends on X86_32 && ACPI && NUMA && (X86_SUMMIT || X86_GENERICARCH)
depends on X86_32 && ACPI && NUMA && X86_GENERICARCH
select ACPI_NUMA
config HAVE_ARCH_PARSE_SRAT
@@ -441,11 +460,11 @@ config HAVE_ARCH_PARSE_SRAT
config X86_SUMMIT_NUMA
def_bool y
depends on X86_32 && NUMA && (X86_SUMMIT || X86_GENERICARCH)
depends on X86_32 && NUMA && X86_GENERICARCH
config X86_CYCLONE_TIMER
def_bool y
depends on X86_32 && X86_SUMMIT || X86_GENERICARCH
depends on X86_GENERICARCH
config ES7000_CLUSTERED_APIC
def_bool y
@@ -910,9 +929,9 @@ config X86_PAE
config NUMA
bool "Numa Memory Allocation and Scheduler Support (EXPERIMENTAL)"
depends on SMP
depends on X86_64 || (X86_32 && HIGHMEM64G && (X86_NUMAQ || (X86_SUMMIT || X86_GENERICARCH) && ACPI) && EXPERIMENTAL)
depends on X86_64 || (X86_32 && HIGHMEM64G && (X86_NUMAQ || X86_BIGSMP || X86_SUMMIT && ACPI) && EXPERIMENTAL)
default n if X86_PC
default y if (X86_NUMAQ || X86_SUMMIT)
default y if (X86_NUMAQ || X86_SUMMIT || X86_BIGSMP)
help
Enable NUMA (Non Uniform Memory Access) support.
The kernel will try to allocate memory used by a CPU on the
@@ -1089,6 +1108,40 @@ config MTRR
See <file:Documentation/mtrr.txt> for more information.
config MTRR_SANITIZER
def_bool y
prompt "MTRR cleanup support"
depends on MTRR
help
Convert the MTRR layout from continuous to discrete, so that some
X drivers can add WB entries.
Say N here if you see bootup problems (boot crash, boot hang,
spontaneous reboots).
The cleanup can be disabled with the disable_mtrr_cleanup boot
parameter. mtrr_chunk_size can be used to set the largest MTRR entry
size for a continuous block that has to hold holes (aka UC entries).
If unsure, say Y.
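As a rough illustration of what the cleanup does (the register numbers and sizes below are made up, and this is not literal /proc/mtrr output): a BIOS may describe 1.75GB of usable RAM "continuously" as one large WB range with a UC hole punched into it,
    reg0: base=0GB    size=2GB    type=write-back
    reg1: base=1.75GB size=256MB  type=uncachable
After cleanup the same memory is described "discretely" by WB entries that simply skip the hole,
    reg0: base=0GB    size=1GB    type=write-back
    reg1: base=1GB    size=512MB  type=write-back
    reg2: base=1.5GB  size=256MB  type=write-back
which frees the remaining variable MTRRs (see mtrr_spare_reg_nr above) so that, for example, an X driver can later add its own WB entry for a framebuffer.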
config MTRR_SANITIZER_ENABLE_DEFAULT
int "MTRR cleanup enable value (0-1)"
range 0 1
default "0"
depends on MTRR_SANITIZER
help
Default value for enabling the MTRR cleanup at boot; it can be
overridden with the enable_mtrr_cleanup / disable_mtrr_cleanup boot parameters.
config MTRR_SANITIZER_SPARE_REG_NR_DEFAULT
int "MTRR cleanup spare reg num (0-7)"
range 0 7
default "1"
depends on MTRR_SANITIZER
help
Default number of spare MTRR entries that the cleanup keeps free;
it can be changed at boot time via mtrr_spare_reg_nr=.
config X86_PAT
bool
prompt "x86 PAT support"
......
@@ -137,15 +137,6 @@ config 4KSTACKS
on the VM subsystem for higher order allocations. This option
will also use IRQ stacks to compensate for the reduced stackspace.
config X86_FIND_SMP_CONFIG
def_bool y
depends on X86_LOCAL_APIC || X86_VOYAGER
depends on X86_32
config X86_MPPARSE
def_bool y
depends on (X86_32 && (X86_LOCAL_APIC && !X86_VISWS)) || X86_64
config DOUBLEFAULT
default y
bool "Enable doublefault exception handler" if EMBEDDED
......
@@ -117,29 +117,11 @@ mcore-$(CONFIG_X86_VOYAGER) := arch/x86/mach-voyager/
mflags-$(CONFIG_X86_VISWS) := -Iinclude/asm-x86/mach-visws
mcore-$(CONFIG_X86_VISWS) := arch/x86/mach-visws/
# NUMAQ subarch support
mflags-$(CONFIG_X86_NUMAQ) := -Iinclude/asm-x86/mach-numaq
mcore-$(CONFIG_X86_NUMAQ) := arch/x86/mach-default/
# BIGSMP subarch support
mflags-$(CONFIG_X86_BIGSMP) := -Iinclude/asm-x86/mach-bigsmp
mcore-$(CONFIG_X86_BIGSMP) := arch/x86/mach-default/
#Summit subarch support
mflags-$(CONFIG_X86_SUMMIT) := -Iinclude/asm-x86/mach-summit
mcore-$(CONFIG_X86_SUMMIT) := arch/x86/mach-default/
# generic subarchitecture
mflags-$(CONFIG_X86_GENERICARCH):= -Iinclude/asm-x86/mach-generic
fcore-$(CONFIG_X86_GENERICARCH) += arch/x86/mach-generic/
mcore-$(CONFIG_X86_GENERICARCH) := arch/x86/mach-default/
# ES7000 subarch support
mflags-$(CONFIG_X86_ES7000) := -Iinclude/asm-x86/mach-es7000
fcore-$(CONFIG_X86_ES7000) := arch/x86/mach-es7000/
mcore-$(CONFIG_X86_ES7000) := arch/x86/mach-default/
# RDC R-321x subarch support
mflags-$(CONFIG_X86_RDC321X) := -Iinclude/asm-x86/mach-rdc321x
mcore-$(CONFIG_X86_RDC321X) := arch/x86/mach-default/
@@ -160,6 +142,7 @@ KBUILD_AFLAGS += $(mflags-y)
head-y := arch/x86/kernel/head_$(BITS).o
head-y += arch/x86/kernel/head$(BITS).o
head-y += arch/x86/kernel/head.o
head-y += arch/x86/kernel/init_task.o
libs-y += arch/x86/lib/
......
@@ -218,10 +218,6 @@ static char *vidmem;
static int vidport;
static int lines, cols;
#ifdef CONFIG_X86_NUMAQ
void *xquad_portio;
#endif
#include "../../../../lib/inflate.c" #include "../../../../lib/inflate.c"
static void *malloc(int size) static void *malloc(int size)
......
@@ -13,6 +13,7 @@
*/
#include "boot.h"
#include <linux/kernel.h>
#define SMAP 0x534d4150 /* ASCII "SMAP" */
@@ -53,7 +54,7 @@ static int detect_memory_e820(void)
count++;
desc++;
} while (next && count < E820MAX);
} while (next && count < ARRAY_SIZE(boot_params.e820_map));
return boot_params.e820_entries = count;
}
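The loop bound above changes from the fixed E820MAX constant to ARRAY_SIZE() of the array actually being filled. A minimal user-space sketch of that idiom (stand-in types and values, not the kernel's boot code):
#include <stdio.h>
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))  /* same idea as the kernel macro */
struct e820entry { unsigned long long addr, size; unsigned int type; };
static struct e820entry e820_map[128];              /* stand-in for boot_params.e820_map */
int main(void)
{
	unsigned int count = 0;
	/* the bound tracks the array definition automatically if its size ever changes */
	while (count < ARRAY_SIZE(e820_map)) {
		e820_map[count].addr = (unsigned long long)count << 12;
		e820_map[count].size = 1ULL << 12;
		e820_map[count].type = 1;            /* E820_RAM */
		count++;
	}
	printf("filled %u entries\n", count);
	return 0;
}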
......
@@ -2,7 +2,7 @@
# Makefile for the linux kernel.
#
extra-y := head_$(BITS).o head$(BITS).o init_task.o vmlinux.lds
extra-y := head_$(BITS).o head$(BITS).o head.o init_task.o vmlinux.lds
CPPFLAGS_vmlinux.lds += -U$(UTS_MACHINE)
@@ -22,7 +22,7 @@ obj-y += setup_$(BITS).o i8259.o irqinit_$(BITS).o setup.o
obj-$(CONFIG_X86_32) += sys_i386_32.o i386_ksyms_32.o
obj-$(CONFIG_X86_64) += sys_x86_64.o x8664_ksyms_64.o
obj-$(CONFIG_X86_64) += syscall_64.o vsyscall_64.o setup64.o
obj-y += bootflag.o e820_$(BITS).o
obj-y += bootflag.o e820.o
obj-y += pci-dma.o quirks.o i8237.o topology.o kdebugfs.o
obj-y += alternative.o i8253.o pci-nommu.o
obj-y += tsc_$(BITS).o io_delay.o rtc.o
......
@@ -328,7 +328,7 @@ void __init early_gart_iommu_check(void)
E820_RAM)) {
/* reserve it, so we can reuse it in second kernel */
printk(KERN_INFO "update e820 for GART\n");
add_memory_region(aper_base, aper_size, E820_RESERVED);
e820_add_region(aper_base, aper_size, E820_RESERVED);
update_e820();
}
}
......
@@ -79,6 +79,11 @@ char system_vectors[NR_VECTORS] = { [0 ... NR_VECTORS-1] = SYS_VECTOR_FREE};
*/
int apic_verbosity;
int pic_mode;
/* Have we found an MP table */
int smp_found_config;
static unsigned int calibration_result;
static int lapic_next_event(unsigned long delta,
@@ -1202,7 +1207,7 @@ void __init init_apic_mappings(void)
for (i = 0; i < nr_ioapics; i++) {
if (smp_found_config) {
ioapic_phys = mp_ioapics[i].mpc_apicaddr;
ioapic_phys = mp_ioapics[i].mp_apicaddr;
if (!ioapic_phys) {
printk(KERN_ERR
"WARNING: bogus zero IO-APIC "
@@ -1517,6 +1522,9 @@ void __cpuinit generic_processor_info(int apicid, int version)
*/
cpu = 0;
if (apicid > max_physical_apicid)
max_physical_apicid = apicid;
/*
* Would be preferable to switch to bigsmp when CONFIG_HOTPLUG_CPU=y
* but we need to work other dependencies like SMP_SUSPEND etc
@@ -1524,7 +1532,7 @@ void __cpuinit generic_processor_info(int apicid, int version)
* if (CPU_HOTPLUG_ENABLED || num_processors > 8)
* - Ashok Raj <ashok.raj@intel.com>
*/
if (num_processors > 8) {
if (max_physical_apicid >= 8) {
switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_INTEL:
if (!APIC_XAPIC(version)) {
......
@@ -56,6 +56,9 @@ EXPORT_SYMBOL_GPL(local_apic_timer_c2_ok);
*/
int apic_verbosity;
/* Have we found an MP table */
int smp_found_config;
static struct resource lapic_resource = {
.name = "Local APIC",
.flags = IORESOURCE_MEM | IORESOURCE_BUSY,
@@ -1068,6 +1071,9 @@ void __cpuinit generic_processor_info(int apicid, int version)
*/
cpu = 0;
}
if (apicid > max_physical_apicid)
max_physical_apicid = apicid;
/* are we being called early in kernel startup? */
if (x86_cpu_to_apicid_early_ptr) {
u16 *cpu_to_apicid = x86_cpu_to_apicid_early_ptr;
......
@@ -37,7 +37,7 @@ static struct fixed_range_block fixed_range_blocks[] = {
static unsigned long smp_changes_mask;
static struct mtrr_state mtrr_state = {};
static int mtrr_state_set;
static u64 tom2;
u64 mtrr_tom2;
#undef MODULE_PARAM_PREFIX
#define MODULE_PARAM_PREFIX "mtrr."
@@ -139,8 +139,8 @@ u8 mtrr_type_lookup(u64 start, u64 end)
}
}
if (tom2) {
if (mtrr_tom2) {
if (start >= (1ULL<<32) && (end < tom2))
if (start >= (1ULL<<32) && (end < mtrr_tom2))
return MTRR_TYPE_WRBACK;
}
@@ -158,6 +158,20 @@ get_mtrr_var_range(unsigned int index, struct mtrr_var_range *vr)
rdmsr(MTRRphysMask_MSR(index), vr->mask_lo, vr->mask_hi);
}
/* fill the MSR pair relating to a var range */
void fill_mtrr_var_range(unsigned int index,
u32 base_lo, u32 base_hi, u32 mask_lo, u32 mask_hi)
{
struct mtrr_var_range *vr;
vr = mtrr_state.var_ranges;
vr[index].base_lo = base_lo;
vr[index].base_hi = base_hi;
vr[index].mask_lo = mask_lo;
vr[index].mask_hi = mask_hi;
}
static void
get_fixed_ranges(mtrr_type * frs)
{
@@ -213,13 +227,13 @@ void __init get_mtrr_state(void)
mtrr_state.enabled = (lo & 0xc00) >> 10;
if (amd_special_default_mtrr()) {
unsigned lo, hi;
unsigned low, high;
/* TOP_MEM2 */
rdmsr(MSR_K8_TOP_MEM2, lo, hi);
rdmsr(MSR_K8_TOP_MEM2, low, high);
tom2 = hi;
mtrr_tom2 = high;
tom2 <<= 32;
mtrr_tom2 <<= 32;
tom2 |= lo;
mtrr_tom2 |= low;
tom2 &= 0xffffff8000000ULL;
mtrr_tom2 &= 0xffffff800000ULL;
}
if (mtrr_show) {
int high_width;
@@ -251,9 +265,9 @@ void __init get_mtrr_state(void)
else
printk(KERN_INFO "MTRR %u disabled\n", i);
}
if (tom2) {
if (mtrr_tom2) {
printk(KERN_INFO "TOM2: %016llx aka %lldM\n",
tom2, tom2>>20);
mtrr_tom2, mtrr_tom2>>20);
}
}
mtrr_state_set = 1;
@@ -328,7 +342,7 @@ static void set_fixed_range(int msr, bool *changed, unsigned int *msrwords)
if (lo != msrwords[0] || hi != msrwords[1]) {
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
boot_cpu_data.x86 == 15 &&
(boot_cpu_data.x86 >= 0x0f && boot_cpu_data.x86 <= 0x11) &&
((msrwords[0] | msrwords[1]) & K8_MTRR_RDMEM_WRMEM_MASK))
k8_enable_fixed_iorrs();
mtrr_wrmsr(msr, msrwords[0], msrwords[1]);
......
@@ -81,6 +81,8 @@ void set_mtrr_done(struct set_mtrr_context *ctxt);
void set_mtrr_cache_disable(struct set_mtrr_context *ctxt);
void set_mtrr_prepare_save(struct set_mtrr_context *ctxt);
void fill_mtrr_var_range(unsigned int index,
u32 base_lo, u32 base_hi, u32 mask_lo, u32 mask_hi);
void get_mtrr_state(void);
extern void set_mtrr_ops(struct mtrr_ops * ops);
@@ -92,6 +94,7 @@ extern struct mtrr_ops * mtrr_if;
#define use_intel() (mtrr_if && mtrr_if->use_intel_if == 1)
extern unsigned int num_var_ranges;
extern u64 mtrr_tom2;
void mtrr_state_warn(void);
const char *mtrr_attrib_to_str(int x);
......
@@ -213,6 +213,48 @@ unsigned long efi_get_time(void)
eft.minute, eft.second);
}
/*
* Tell the kernel about the EFI memory map. This might include
* more than the max 128 entries that can fit in the e820 legacy
* (zeropage) memory map.
*/
static void __init add_efi_memmap(void)
{
void *p;
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
efi_memory_desc_t *md = p;
unsigned long long start = md->phys_addr;
unsigned long long size = md->num_pages << EFI_PAGE_SHIFT;
int e820_type;
if (md->attribute & EFI_MEMORY_WB)
e820_type = E820_RAM;
else
e820_type = E820_RESERVED;
e820_add_region(start, size, e820_type);
}
sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
}
void __init efi_reserve_early(void)
{
unsigned long pmap;
pmap = boot_params.efi_info.efi_memmap;
#ifdef CONFIG_X86_64
pmap += (__u64)boot_params.efi_info.efi_memmap_hi << 32;
#endif
memmap.phys_map = (void *)pmap;
memmap.nr_map = boot_params.efi_info.efi_memmap_size /
boot_params.efi_info.efi_memdesc_size;
memmap.desc_version = boot_params.efi_info.efi_memdesc_version;
memmap.desc_size = boot_params.efi_info.efi_memdesc_size;
reserve_early(pmap, pmap + memmap.nr_map * memmap.desc_size,
"EFI memmap");
}
#if EFI_DEBUG
static void __init print_efi_memmap(void)
{
@@ -242,21 +284,11 @@ void __init efi_init(void)
int i = 0;
void *tmp;
#ifdef CONFIG_X86_32
efi_phys.systab = (efi_system_table_t *)boot_params.efi_info.efi_systab;
memmap.phys_map = (void *)boot_params.efi_info.efi_memmap;
#else
efi_phys.systab = (efi_system_table_t *)
(boot_params.efi_info.efi_systab |
((__u64)boot_params.efi_info.efi_systab_hi<<32));
memmap.phys_map = (void *)
(boot_params.efi_info.efi_memmap |
((__u64)boot_params.efi_info.efi_memmap_hi<<32));
#endif
memmap.nr_map = boot_params.efi_info.efi_memmap_size /
boot_params.efi_info.efi_memdesc_size;
memmap.desc_version = boot_params.efi_info.efi_memdesc_version;
memmap.desc_size = boot_params.efi_info.efi_memdesc_size;
efi_phys.systab = (efi_system_table_t *)boot_params.efi_info.efi_systab;
#ifdef CONFIG_X86_64
efi_phys.systab = (void *)efi_phys.systab +
((__u64)boot_params.efi_info.efi_systab_hi<<32);
#endif
efi.systab = early_ioremap((unsigned long)efi_phys.systab,
sizeof(efi_system_table_t));
@@ -370,6 +402,7 @@ void __init efi_init(void)
if (memmap.desc_size != sizeof(efi_memory_desc_t))
printk(KERN_WARNING "Kernel-defined memdesc"
"doesn't match the one from EFI!\n");
add_efi_memmap();
/* Setup for EFI runtime service */
reboot_type = BOOT_EFI;
......
@@ -97,13 +97,7 @@ void __init efi_call_phys_epilog(void)
early_runtime_code_mapping_set_exec(0);
}
void __init efi_reserve_bootmem(void)
{
reserve_bootmem_generic((unsigned long)memmap.phys_map,
memmap.nr_map * memmap.desc_size);
}
void __iomem * __init efi_ioremap(unsigned long phys_addr, unsigned long size)
void __iomem *__init efi_ioremap(unsigned long phys_addr, unsigned long size)
{
static unsigned pages_mapped __initdata;
unsigned i, pages;
......
@@ -51,7 +51,7 @@ void __init setup_apic_routing(void)
else
#endif
if (num_possible_cpus() <= 8)
if (max_physical_apicid < 8)
genapic = &apic_flat;
else
genapic = &apic_physflat;
......
#include <linux/kernel.h>
#include <linux/init.h>
#include <asm/setup.h>
#include <asm/bios_ebda.h>
#define BIOS_LOWMEM_KILOBYTES 0x413
/*
* The BIOS places the EBDA/XBDA at the top of conventional
* memory, and usually decreases the reported amount of
* conventional memory (int 0x12) too. This also contains a
* workaround for Dell systems that neglect to reserve EBDA.
* The same workaround also avoids a problem with the AMD768MPX
* chipset: reserve a page before VGA to prevent PCI prefetch
* into it (errata #56). Usually the page is reserved anyways,
* unless you have no PS/2 mouse plugged in.
*/
void __init reserve_ebda_region(void)
{
unsigned int lowmem, ebda_addr;
/* To determine the position of the EBDA and the */
/* end of conventional memory, we need to look at */
/* the BIOS data area. In a paravirtual environment */
/* that area is absent. We'll just have to assume */
/* that the paravirt case can handle memory setup */
/* correctly, without our help. */
if (paravirt_enabled())
return;
/* end of low (conventional) memory */
lowmem = *(unsigned short *)__va(BIOS_LOWMEM_KILOBYTES);
lowmem <<= 10;
/* start of EBDA area */
ebda_addr = get_bios_ebda();
/* Fixup: bios puts an EBDA in the top 64K segment */
/* of conventional memory, but does not adjust lowmem. */
if ((lowmem - ebda_addr) <= 0x10000)
lowmem = ebda_addr;
/* Fixup: bios does not report an EBDA at all. */
/* Some old Dells seem to need 4k anyhow (bugzilla 2990) */
if ((ebda_addr == 0) && (lowmem >= 0x9f000))
lowmem = 0x9f000;
/* Paranoia: should never happen, but... */
if ((lowmem == 0) || (lowmem >= 0x100000))
lowmem = 0x9f000;
/* reserve all memory between lowmem and the 1MB mark */
reserve_early(lowmem, 0x100000, "BIOS reserved");
}
void __init reserve_setup_data(void)
{
struct setup_data *data;
u64 pa_data;
char buf[32];
if (boot_params.hdr.version < 0x0209)
return;
pa_data = boot_params.hdr.setup_data;
while (pa_data) {
data = early_ioremap(pa_data, sizeof(*data));
sprintf(buf, "setup data %x", data->type);
reserve_early(pa_data, pa_data+sizeof(*data)+data->len, buf);
pa_data = data->next;
early_iounmap(data, sizeof(*data));
}
}
@@ -8,7 +8,34 @@
#include <linux/init.h>
#include <linux/start_kernel.h>
#include <asm/setup.h>
#include <asm/sections.h>
#include <asm/e820.h>
#include <asm/bios_ebda.h>
void __init i386_start_kernel(void)
{
reserve_early(__pa_symbol(&_text), __pa_symbol(&_end), "TEXT DATA BSS");
#ifdef CONFIG_BLK_DEV_INITRD
/* Reserve INITRD */
if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) {
u64 ramdisk_image = boot_params.hdr.ramdisk_image;
u64 ramdisk_size = boot_params.hdr.ramdisk_size;
u64 ramdisk_end = ramdisk_image + ramdisk_size;
reserve_early(ramdisk_image, ramdisk_end, "RAMDISK");
}
#endif
reserve_early(init_pg_tables_start, init_pg_tables_end,
"INIT_PG_TABLE");
reserve_ebda_region();
/*
* At this point everything still needed from the boot loader
* or BIOS or kernel text should be early reserved or marked not
* RAM in e820. All other memory is free game.
*/
start_kernel();
}
@@ -51,74 +51,6 @@ static void __init copy_bootdata(char *real_mode_data)
}
}
#define BIOS_LOWMEM_KILOBYTES 0x413
/*
* The BIOS places the EBDA/XBDA at the top of conventional
* memory, and usually decreases the reported amount of
* conventional memory (int 0x12) too. This also contains a
* workaround for Dell systems that neglect to reserve EBDA.
* The same workaround also avoids a problem with the AMD768MPX
* chipset: reserve a page before VGA to prevent PCI prefetch
* into it (errata #56). Usually the page is reserved anyways,
* unless you have no PS/2 mouse plugged in.
*/
static void __init reserve_ebda_region(void)
{
unsigned int lowmem, ebda_addr;
/* To determine the position of the EBDA and the */
/* end of conventional memory, we need to look at */
/* the BIOS data area. In a paravirtual environment */
/* that area is absent. We'll just have to assume */
/* that the paravirt case can handle memory setup */
/* correctly, without our help. */
if (paravirt_enabled())
return;
/* end of low (conventional) memory */
lowmem = *(unsigned short *)__va(BIOS_LOWMEM_KILOBYTES);
lowmem <<= 10;
/* start of EBDA area */
ebda_addr = get_bios_ebda();
/* Fixup: bios puts an EBDA in the top 64K segment */
/* of conventional memory, but does not adjust lowmem. */
if ((lowmem - ebda_addr) <= 0x10000)
lowmem = ebda_addr;
/* Fixup: bios does not report an EBDA at all. */
/* Some old Dells seem to need 4k anyhow (bugzilla 2990) */
if ((ebda_addr == 0) && (lowmem >= 0x9f000))
lowmem = 0x9f000;
/* Paranoia: should never happen, but... */
if ((lowmem == 0) || (lowmem >= 0x100000))
lowmem = 0x9f000;
/* reserve all memory between lowmem and the 1MB mark */
reserve_early(lowmem, 0x100000, "BIOS reserved");
}
static void __init reserve_setup_data(void)
{
struct setup_data *data;
unsigned long pa_data;
char buf[32];
if (boot_params.hdr.version < 0x0209)
return;
pa_data = boot_params.hdr.setup_data;
while (pa_data) {
data = early_ioremap(pa_data, sizeof(*data));
sprintf(buf, "setup data %x", data->type);
reserve_early(pa_data, pa_data+sizeof(*data)+data->len, buf);
pa_data = data->next;
early_iounmap(data, sizeof(*data));
}
}
void __init x86_64_start_kernel(char * real_mode_data)
{
int i;
......
@@ -194,6 +194,7 @@ default_entry:
xorl %ebx,%ebx /* %ebx is kept at zero */
movl $pa(pg0), %edi
movl %edi, pa(init_pg_tables_start)
movl $pa(swapper_pg_pmd), %edx
movl $PTE_ATTR, %eax
10:
@@ -219,6 +220,8 @@ default_entry:
jb 10b
1:
movl %edi,pa(init_pg_tables_end)
shrl $12, %eax
movl %eax, pa(max_pfn_mapped)
/* Do early initialization of the fixmap area */
movl $pa(swapper_pg_fixmap)+PDE_ATTR,%eax
@@ -228,6 +231,7 @@ default_entry:
page_pde_offset = (__PAGE_OFFSET >> 20);
movl $pa(pg0), %edi
movl %edi, pa(init_pg_tables_start)
movl $pa(swapper_pg_dir), %edx
movl $PTE_ATTR, %eax
10:
@@ -249,6 +253,8 @@ page_pde_offset = (__PAGE_OFFSET >> 20);
cmpl %ebp,%eax
jb 10b
movl %edi,pa(init_pg_tables_end)
shrl $12, %eax
movl %eax, pa(max_pfn_mapped)
/* Do early initialization of the fixmap area */
movl $pa(swapper_pg_fixmap)+PDE_ATTR,%eax
......
@@ -31,6 +31,8 @@
#include <asm/numaq.h>
#include <asm/topology.h>
#include <asm/processor.h>
#include <asm/mpspec.h>
#include <asm/e820.h>
#define MB_TO_PAGES(addr) ((addr) << (20 - PAGE_SHIFT))
@@ -58,6 +60,8 @@ static void __init smp_dump_qct(void)
node_end_pfn[node] = MB_TO_PAGES(
eq->hi_shrd_mem_start + eq->hi_shrd_mem_size);
e820_register_active_regions(node, node_start_pfn[node],
node_end_pfn[node]);
memory_present(node,
node_start_pfn[node], node_end_pfn[node]);
node_remap_size[node] = node_memmap_size_bytes(node,
@@ -67,13 +71,24 @@ static void __init smp_dump_qct(void)
}
}
/*
* Unlike Summit, we don't really care to let the NUMA-Q
* fall back to flat mode. Don't compile for NUMA-Q
* unless you really need it!
*/
static __init void early_check_numaq(void)
{
/*
* Find possible boot-time SMP configuration:
*/
early_find_smp_config();
/*
* get boot-time SMP configuration:
*/
if (smp_found_config)
early_get_smp_config();
}
int __init get_memcfg_numaq(void)
{
early_check_numaq();
if (!found_numaq)
return 0;
smp_dump_qct();
return 1;
}
......
@@ -17,6 +17,7 @@ unsigned int num_processors;
unsigned disabled_cpus __cpuinitdata;
/* Processor that is doing the boot up */
unsigned int boot_cpu_physical_apicid = -1U;
unsigned int max_physical_apicid;
EXPORT_SYMBOL(boot_cpu_physical_apicid);
DEFINE_PER_CPU(u16, x86_cpu_to_apicid) = BAD_APICID;
@@ -137,3 +138,28 @@ void __init setup_per_cpu_areas(void)
}
#endif
void __init parse_setup_data(void)
{
struct setup_data *data;
u64 pa_data;
if (boot_params.hdr.version < 0x0209)
return;
pa_data = boot_params.hdr.setup_data;
while (pa_data) {
data = early_ioremap(pa_data, PAGE_SIZE);
switch (data->type) {
case SETUP_E820_EXT:
parse_e820_ext(data, pa_data);
break;
default:
break;
}
#ifndef CONFIG_DEBUG_BOOT_PARAMS
free_early(pa_data, pa_data+sizeof(*data)+data->len);
#endif
pa_data = data->next;
early_iounmap(data, PAGE_SIZE);
}
}
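For context, parse_setup_data() above walks a singly linked list of records handed over by the boot loader. A sketch of the node layout it relies on (mirroring the x86 boot protocol's setup_data header; field names as used in the code above):
#include <stdint.h>
struct setup_data {
	uint64_t next;   /* physical address of the next node; 0 terminates the list */
	uint32_t type;   /* record type, e.g. SETUP_E820_EXT for extended e820 data */
	uint32_t len;    /* length of data[] in bytes */
	uint8_t  data[]; /* payload follows the header immediately */
};
Each iteration maps one node with early_ioremap(), dispatches on type, optionally frees the early reservation, and then follows next until it reads zero.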
@@ -554,23 +554,6 @@ cpumask_t cpu_coregroup_map(int cpu)
return c->llc_shared_map;
}
#ifdef CONFIG_X86_32
/*
* We are called very early to get the low memory for the
* SMP bootup trampoline page.
*/
void __init smp_alloc_memory(void)
{
trampoline_base = alloc_bootmem_low_pages(PAGE_SIZE);
/*
* Has to be in very low memory so we can execute
* real-mode AP code.
*/
if (__pa(trampoline_base) >= 0x9F000)
BUG();
}
#endif
static void impress_friends(void)
{
int cpu;
......
@@ -36,7 +36,9 @@ static struct rio_table_hdr *rio_table_hdr __initdata;
static struct scal_detail *scal_devs[MAX_NUMNODES] __initdata;
static struct rio_detail *rio_devs[MAX_NUMNODES*4] __initdata;
#ifndef CONFIG_X86_NUMAQ
static int mp_bus_id_to_node[MAX_MP_BUSSES] __initdata;
#endif
static int __init setup_pci_node_map_for_wpeg(int wpeg_num, int last_bus)
{
......
@@ -2,7 +2,7 @@
#include <asm/trampoline.h>
/* ready for x86_64, no harm for x86, since it will overwrite after alloc */
/* ready for x86_64 and x86 */
unsigned char *trampoline_base = __va(TRAMPOLINE_BASE);
/*
......
@@ -835,7 +835,7 @@ static __init char *lguest_memory_setup(void)
/* The Linux bootloader header contains an "e820" memory map: the
* Launcher populated the first entry with our memory limit. */
add_memory_region(boot_params.e820_map[0].addr,
e820_add_region(boot_params.e820_map[0].addr,
boot_params.e820_map[0].size,
boot_params.e820_map[0].type);
@@ -1012,6 +1012,7 @@ __init void lguest_init(void)
* clobbered. The Launcher places our initial pagetables somewhere at
* the top of our physical memory, so we don't need extra space: set
* init_pg_tables_end to the end of the kernel. */
init_pg_tables_start = __pa(pg0);
init_pg_tables_end = __pa(pg0);
/* Load the %fs segment register (the per-cpu segment register) with
@@ -1065,9 +1066,9 @@ __init void lguest_init(void)
pm_power_off = lguest_power_off;
machine_ops.restart = lguest_restart;
/* Now we're set up, call start_kernel() in init/main.c and we proceed
/* Now we're set up, call i386_start_kernel() in head32.c and we proceed
* to boot as normal. It never returns. */
start_kernel();
i386_start_kernel();
}
/*
* This marks the end of stage II of our journey, The Guest.
......
@@ -3,4 +3,3 @@
#
obj-$(CONFIG_X86_ES7000) := es7000plat.o
obj-$(CONFIG_X86_GENERICARCH) := es7000plat.o
@@ -2,7 +2,11 @@
# Makefile for the generic architecture
#
EXTRA_CFLAGS := -Iarch/x86/kernel
obj-y := probe.o summit.o bigsmp.o es7000.o default.o
obj-y += ../../x86/mach-es7000/
obj-y := probe.o default.o
obj-$(CONFIG_X86_NUMAQ) += numaq.o
obj-$(CONFIG_X86_SUMMIT) += summit.o
obj-$(CONFIG_X86_BIGSMP) += bigsmp.o
obj-$(CONFIG_X86_ES7000) += es7000.o
obj-$(CONFIG_X86_ES7000) += ../../x86/mach-es7000/
@@ -14,4 +14,6 @@ static inline unsigned int get_bios_ebda(void)
return address; /* 0 means none */
}
void reserve_ebda_region(void);
#endif /* _MACH_BIOS_EBDA_H */