Commit 34bf5d0f authored by Ingo Molnar

Merge branch 'x86/core' into x86/cleanups

parents bd8b96df 79a66b96
@@ -244,18 +244,6 @@ Who:	Michael Buesch <mb@bu3sch.de>
 ---------------------------
 
-What:	init_mm export
-When:	2.6.26
-Why:	Not used in-tree. The current out-of-tree users used it to
-	work around problems in the CPA code which should be resolved
-	by now. One usecase was described to provide verification code
-	of the CPA operation. That's a good idea in general, but such
-	code / infrastructure should be in the kernel and not in some
-	out-of-tree driver.
-Who:	Thomas Gleixner <tglx@linutronix.de>
-
-----------------------------
 What:	usedac i386 kernel parameter
 When:	2.6.27
 Why:	replaced by allowdac and no dac combination
......
@@ -1339,10 +1339,13 @@ nmi_watchdog
 
 Enables/Disables the NMI watchdog on x86 systems. When the value is non-zero
 the NMI watchdog is enabled and will continuously test all online cpus to
-determine whether or not they are still functioning properly.
+determine whether or not they are still functioning properly. Currently,
+passing "nmi_watchdog=" parameter at boot time is required for this function
+to work.
 
-Because the NMI watchdog shares registers with oprofile, by disabling the NMI
-watchdog, oprofile may have more registers to utilize.
+If LAPIC NMI watchdog method is in use (nmi_watchdog=2 kernel parameter), the
+NMI watchdog shares registers with oprofile. By disabling the NMI watchdog,
+oprofile may have more registers to utilize.
 
 msgmni
 ------
......
@@ -220,14 +220,17 @@ and is between 256 and 4096 characters. It is defined in the file
 			Bits in debug_level correspond to a level in
 			ACPI_DEBUG_PRINT statements, e.g.,
 			    ACPI_DEBUG_PRINT((ACPI_DB_INFO, ...
-			See Documentation/acpi/debug.txt for more information
-			about debug layers and levels.
+			The debug_level mask defaults to "info". See
+			Documentation/acpi/debug.txt for more information about
+			debug layers and levels.
+
+			Enable processor driver info messages:
+			    acpi.debug_layer=0x20000000
+			Enable PCI/PCI interrupt routing info messages:
+			    acpi.debug_layer=0x400000
 
 			Enable AML "Debug" output, i.e., stores to the Debug
 			object while interpreting AML:
 			    acpi.debug_layer=0xffffffff acpi.debug_level=0x2
-			Enable PCI/PCI interrupt routing info messages:
-			    acpi.debug_layer=0x400000 acpi.debug_level=0x4
 			Enable all messages related to ACPI hardware:
 			    acpi.debug_layer=0x2 acpi.debug_level=0xffffffff
@@ -1393,7 +1396,20 @@ and is between 256 and 4096 characters. It is defined in the file
 			when a NMI is triggered.
 			Format: [state][,regs][,debounce][,die]
 
-	nmi_watchdog=	[KNL,BUGS=X86-32] Debugging features for SMP kernels
+	nmi_watchdog=	[KNL,BUGS=X86-32,X86-64] Debugging features for SMP kernels
+			Format: [panic,][num]
+			Valid num: 0,1,2
+			0 - turn nmi_watchdog off
+			1 - use the IO-APIC timer for the NMI watchdog
+			2 - use the local APIC for the NMI watchdog using
+			a performance counter. Note: This will use one performance
+			counter and the local APIC's performance vector.
+			When panic is specified panic when an NMI watchdog timeout occurs.
+			This is useful when you use a panic=... timeout and need the box
+			quickly up again.
+			Instead of 1 and 2 it is possible to use the following
+			symbolic names: lapic and ioapic
+			Example: nmi_watchdog=2 or nmi_watchdog=panic,lapic
 
 	no387		[BUGS=X86-32] Tells the kernel to use the 387 maths
 			emulation library even if a 387 maths coprocessor
@@ -1626,6 +1642,17 @@ and is between 256 and 4096 characters. It is defined in the file
 	nomsi		[MSI] If the PCI_MSI kernel config parameter is
 			enabled, this kernel boot option can be used to
 			disable the use of MSI interrupts system-wide.
 
+	noioapicquirk	[APIC] Disable all boot interrupt quirks.
+			Safety option to keep boot IRQs enabled. This
+			should never be necessary.
+
+	ioapicreroute	[APIC] Enable rerouting of boot IRQs to the
+			primary IO-APIC for bridges that cannot disable
+			boot IRQs. This fixes a source of spurious IRQs
+			when the system masks IRQs.
+
+	noioapicreroute	[APIC] Disable workaround that uses the
+			boot IRQ equivalent of an IRQ that connects to
+			a chipset where boot IRQs cannot be disabled.
+			The opposite of ioapicreroute.
+
 	biosirq		[X86-32] Use PCI BIOS calls to get the interrupt
 			routing table. These calls are known to be buggy
 			on several machines and they hang the machine
@@ -2255,6 +2282,13 @@ and is between 256 and 4096 characters. It is defined in the file
 			Format:
 			<io>,<irq>,<dma>,<dma2>,<sb_io>,<sb_irq>,<sb_dma>,<mpu_io>,<mpu_irq>
 
+	tsc=		Disable clocksource-must-verify flag for TSC.
+			Format: <string>
+			[x86] reliable: mark tsc clocksource as reliable, this
+			disables clocksource verification at runtime.
+			Used to enable high-resolution timer mode on older
+			hardware, and in virtualized environment.
+
 	turbografx.map[2|3]=	[HW,JOY]
 			TurboGraFX parallel port interface
 			Format:
......
@@ -69,6 +69,11 @@ to the overall system performance.
 On x86 nmi_watchdog is disabled by default so you have to enable it with
 a boot time parameter.
 
+It's possible to disable the NMI watchdog in run-time by writing "0" to
+/proc/sys/kernel/nmi_watchdog. Writing "1" to the same file will re-enable
+the NMI watchdog. Notice that you still need to use "nmi_watchdog=" parameter
+at boot time.
+
 NOTE: In kernels prior to 2.4.2-ac18 the NMI-oopser is enabled unconditionally
 on x86 SMP boxes.
......
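The run-time toggle documented in the hunk above is just a procfs write; a
minimal userspace sketch in C (assumes a kernel booted with an nmi_watchdog=
parameter, as the new text requires, and root privileges):

    /* Toggle the NMI watchdog via /proc/sys/kernel/nmi_watchdog.
     * Writing "0" disables it, "1" re-enables it. */
    #include <stdio.h>

    static int set_nmi_watchdog(int enable)
    {
            FILE *f = fopen("/proc/sys/kernel/nmi_watchdog", "w");

            if (!f)
                    return -1;      /* missing file or no permission */
            fprintf(f, "%d\n", enable ? 1 : 0);
            return fclose(f);
    }

    int main(void)
    {
            return set_nmi_watchdog(0) ? 1 : 0;     /* disable it */
    }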
@@ -1063,6 +1063,7 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
 	STAC9227/9228/9229/927x
 	  ref		Reference board
+	  ref-no-jd	Reference board without HP/Mic jack detection
 	  3stack	D965 3stack
 	  5stack	D965 5stack + SPDIF
 	  dell-3stack	Dell Dimension E520
@@ -1076,6 +1077,7 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
 	STAC92HD73*
 	  ref		Reference board
+	  no-jd		BIOS setup but without jack-detection
 	  dell-m6-amic	Dell desktops/laptops with analog mics
 	  dell-m6-dmic	Dell desktops/laptops with digital mics
 	  dell-m6	Dell desktops/laptops with both type of mics
......
@@ -349,7 +349,7 @@ Protocol:	2.00+
   3  SYSLINUX
   4  EtherBoot
   5  ELILO
-  7  GRuB
+  7  GRUB
   8  U-BOOT
   9  Xen
   A  Gujin
@@ -537,8 +537,8 @@ Type:		read
 Offset/size:	0x248/4
 Protocol:	2.08+
 
-  If non-zero then this field contains the offset from the end of the
-  real-mode code to the payload.
+  If non-zero then this field contains the offset from the beginning
+  of the protected-mode code to the payload.
 
   The payload may be compressed. The format of both the compressed and
   uncompressed data should be determined using the standard magic
......
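Under the corrected wording above, the payload sits payload_offset bytes past
the start of the protected-mode code, which itself begins right after the
real-mode setup sectors. A hedged C sketch of the file-offset arithmetic
(illustrative only, not loader-grade code):

    /* Locate the payload inside a bzImage file.  The protected-mode code
     * starts at (setup_sects + 1) * 512 bytes into the image; per the
     * boot protocol, a setup_sects of 0 means 4. */
    #include <stdint.h>

    static uint64_t payload_file_offset(uint8_t setup_sects,
                                        uint32_t payload_offset)
    {
            uint32_t sects = setup_sects ? setup_sects : 4;

            return ((uint64_t)sects + 1) * 512 + payload_offset;
    }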
@@ -80,6 +80,30 @@ pci proc |    --    |    --    |       WC       |
 |          |          |          |                |
 -------------------------------------------------------------------
 
+Advanced APIs for drivers
+-------------------------
+A. Exporting pages to users with remap_pfn_range, io_remap_pfn_range,
+vm_insert_pfn
+
+Drivers wanting to export some pages to userspace do it by using mmap
+interface and a combination of
+1) pgprot_noncached()
+2) io_remap_pfn_range() or remap_pfn_range() or vm_insert_pfn()
+
+With PAT support, a new API pgprot_writecombine is being added. So, drivers can
+continue to use the above sequence, with either pgprot_noncached() or
+pgprot_writecombine() in step 1, followed by step 2.
+
+In addition, step 2 internally tracks the region as UC or WC in memtype
+list in order to ensure no conflicting mapping.
+
+Note that this set of APIs only works with IO (non RAM) regions. If driver
+wants to export a RAM region, it has to do set_memory_uc() or set_memory_wc()
+as step 0 above and also track the usage of those pages and use set_memory_wb()
+before the page is freed to free pool.
+
 Notes:
 
 -- in the above table mean "Not suggested usage for the API". Some of the --'s
......
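The two-step sequence added above maps onto a driver's mmap handler roughly as
follows; a hedged sketch (mydev_mmap and the MMIO base are hypothetical names,
not part of any real driver):

    #include <linux/fs.h>
    #include <linux/mm.h>

    static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
    {
            unsigned long size = vma->vm_end - vma->vm_start;
            unsigned long phys_base = 0xfd000000;   /* hypothetical MMIO base */

            /* Step 1: choose the memory type -- pgprot_noncached() for UC,
             * or pgprot_writecombine() where PAT support is available. */
            vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);

            /* Step 2: map the PFNs; this also records the region in the
             * memtype list so conflicting mappings are refused. */
            return io_remap_pfn_range(vma, vma->vm_start,
                                      phys_base >> PAGE_SHIFT,
                                      size, vma->vm_page_prot);
    }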
@@ -79,17 +79,6 @@ Timing
   Report when timer interrupts are lost because some code turned off
   interrupts for too long.
 
-  nmi_watchdog=NUMBER[,panic]
-  NUMBER can be:
-  0 don't use an NMI watchdog
-  1 use the IO-APIC timer for the NMI watchdog
-  2 use the local APIC for the NMI watchdog using a performance counter. Note
-  This will use one performance counter and the local APIC's performance
-  vector.
-  When panic is specified panic when an NMI watchdog timeout occurs.
-  This is useful when you use a panic=... timeout and need the box
-  quickly up again.
-
   nohpet
   Don't use the HPET timer.
......
@@ -6,7 +6,7 @@ Virtual memory map with 4 level page tables:
 0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
 hole caused by [48:63] sign extension
 ffff800000000000 - ffff80ffffffffff (=40 bits) guard hole
-ffff810000000000 - ffffc0ffffffffff (=46 bits) direct mapping of all phys. memory
+ffff880000000000 - ffffc0ffffffffff (=57 TB) direct mapping of all phys. memory
 ffffc10000000000 - ffffc1ffffffffff (=40 bits) hole
 ffffc20000000000 - ffffe1ffffffffff (=45 bits) vmalloc/ioremap space
 ffffe20000000000 - ffffe2ffffffffff (=40 bits) virtual memory map (1TB)
......
@@ -2191,9 +2191,9 @@ S:	Supported
 
 INOTIFY
 P:	John McCutchan
-M:	ttb@tentacle.dhs.org
+M:	john@johnmccutchan.com
 P:	Robert Love
-M:	rml@novell.com
+M:	rlove@rlove.org
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
@@ -4529,7 +4529,7 @@ S:	Maintained
 
 USB VIDEO CLASS
 P:	Laurent Pinchart
 M:	laurent.pinchart@skynet.be
-L:	linux-uvc-devel@lists.berlios.de
+L:	linux-uvc-devel@lists.berlios.de (subscribers-only)
 L:	video4linux-list@redhat.com
 W:	http://linux-uvc.berlios.de
 S:	Maintained
......
 VERSION = 2
 PATCHLEVEL = 6
 SUBLEVEL = 28
-EXTRAVERSION = -rc8
+EXTRAVERSION =
 NAME = Erotic Pickled Herring
 
 # *DOCUMENTATION*
......
@@ -13,7 +13,7 @@
 #include <linux/mtd/partitions.h>
 #include <linux/mtd/physmap.h>
 
-#include <asm/arch/smc.h>
+#include <mach/smc.h>
 
 static struct smc_timing flash_timing __initdata = {
 	.ncs_read_setup		= 0,
......
@@ -25,10 +25,10 @@
 #include <asm/setup.h>
 
-#include <asm/arch/at32ap700x.h>
-#include <asm/arch/init.h>
-#include <asm/arch/board.h>
-#include <asm/arch/portmux.h>
+#include <mach/at32ap700x.h>
+#include <mach/init.h>
+#include <mach/board.h>
+#include <mach/portmux.h>
 
 /* Oscillator frequencies. These are board-specific */
 unsigned long at32_board_osc_rates[3] = {
......
@@ -10,7 +10,7 @@ MKIMAGE := $(srctree)/scripts/mkuboot.sh
 
 extra-y := vmlinux.bin vmlinux.gz
 
-OBJCOPYFLAGS_vmlinux.bin := -O binary
+OBJCOPYFLAGS_vmlinux.bin := -O binary -R .note.gnu.build-id
 $(obj)/vmlinux.bin: vmlinux FORCE
 	$(call if_changed,objcopy)
......
This diff is collapsed.
@@ -967,28 +967,28 @@ static inline void configure_usart0_pins(void)
 {
 	u32 pin_mask = (1 << 8) | (1 << 9); /* RXD & TXD */
 
-	select_peripheral(PIOA, pin_mask, PERIPH_B, 0);
+	select_peripheral(PIOA, pin_mask, PERIPH_B, AT32_GPIOF_PULLUP);
 }
 
 static inline void configure_usart1_pins(void)
 {
 	u32 pin_mask = (1 << 17) | (1 << 18); /* RXD & TXD */
 
-	select_peripheral(PIOA, pin_mask, PERIPH_A, 0);
+	select_peripheral(PIOA, pin_mask, PERIPH_A, AT32_GPIOF_PULLUP);
 }
 
 static inline void configure_usart2_pins(void)
 {
 	u32 pin_mask = (1 << 26) | (1 << 27); /* RXD & TXD */
 
-	select_peripheral(PIOB, pin_mask, PERIPH_B, 0);
+	select_peripheral(PIOB, pin_mask, PERIPH_B, AT32_GPIOF_PULLUP);
 }
 
 static inline void configure_usart3_pins(void)
 {
 	u32 pin_mask = (1 << 18) | (1 << 17); /* RXD & TXD */
 
-	select_peripheral(PIOB, pin_mask, PERIPH_B, 0);
+	select_peripheral(PIOB, pin_mask, PERIPH_B, AT32_GPIOF_PULLUP);
 }
 
 static struct platform_device *__initdata at32_usarts[4];
......
@@ -50,9 +50,8 @@ static inline __attribute_const__ __u32 __arch_swab32(__u32 x)
 static inline __attribute_const__ __u64 __arch_swab64(__u64 x)
 {
 	__asm__(
-	"	dsbh	%0, %1		\n"
-	"	dshd	%0, %0		\n"
-	"	drotr	%0, %0, 32	\n"
+	"	dsbh	%0, %1\n"
+	"	dshd	%0, %0"
 	: "=r" (x)
 	: "r" (x));
......
@@ -232,7 +232,7 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
  */
 #ifdef __MIPSEB__
 #define ELF_DATA	ELFDATA2MSB
-#elif __MIPSEL__
+#elif defined(__MIPSEL__)
 #define ELF_DATA	ELFDATA2LSB
 #endif
 #define ELF_ARCH	EM_MIPS
......
@@ -44,9 +44,12 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 {
 	BUG_ON(mm == &init_mm); /* Should never happen */
 
-#ifdef CONFIG_SMP
+#if 1 || defined(CONFIG_SMP)
 	flush_tlb_all();
 #else
+	/* FIXME: currently broken, causing space id and protection ids
+	 * to go out of sync, resulting in faults on userspace accesses.
+	 */
 	if (mm) {
 		if (mm->context != 0)
 			free_sid(mm->context);
......
@@ -62,6 +62,8 @@ struct sparc_stackf {
 
 #ifdef __KERNEL__
 
+#include <asm/system.h>
+
 static inline bool pt_regs_is_syscall(struct pt_regs *regs)
 {
 	return (regs->psr & PSR_SYSCALL);
@@ -72,6 +74,14 @@ static inline bool pt_regs_clear_syscall(struct pt_regs *regs)
 	return (regs->psr &= ~PSR_SYSCALL);
 }
 
+#define arch_ptrace_stop_needed(exit_code, info) \
+({	flush_user_windows(); \
+	current_thread_info()->w_saved != 0; \
+})
+
+#define arch_ptrace_stop(exit_code, info) \
+	synchronize_user_stack()
+
 #define user_mode(regs) (!((regs)->psr & PSR_PS))
 #define instruction_pointer(regs) ((regs)->pc)
 #define user_stack_pointer(regs) ((regs)->u_regs[UREG_FP])
......
@@ -114,6 +114,7 @@ struct sparc_trapf {
 #ifdef __KERNEL__
 
 #include <linux/threads.h>
+#include <asm/system.h>
 
 static inline int pt_regs_trap_type(struct pt_regs *regs)
 {
@@ -130,6 +131,14 @@ static inline bool pt_regs_clear_syscall(struct pt_regs *regs)
 	return (regs->tstate &= ~TSTATE_SYSCALL);
 }
 
+#define arch_ptrace_stop_needed(exit_code, info) \
+({	flush_user_windows(); \
+	get_thread_wsaved() != 0; \
+})
+
+#define arch_ptrace_stop(exit_code, info) \
+	synchronize_user_stack()
+
 struct global_reg_snapshot {
 	unsigned long		tstate;
 	unsigned long		tpc;
......
@@ -19,6 +19,8 @@ config X86_64
 config X86
 	def_bool y
 	select HAVE_AOUT if X86_32
+	select HAVE_READQ
+	select HAVE_WRITEQ
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_IDE
 	select HAVE_OPROFILE
@@ -87,6 +89,10 @@ config GENERIC_IOMAP
 config GENERIC_BUG
 	def_bool y
 	depends on BUG
+	select GENERIC_BUG_RELATIVE_POINTERS if X86_64
+
+config GENERIC_BUG_RELATIVE_POINTERS
+	bool
 
 config GENERIC_HWEIGHT
 	def_bool y
@@ -457,10 +463,6 @@ config X86_CYCLONE_TIMER
 	def_bool y
 	depends on X86_GENERICARCH
 
-config ES7000_CLUSTERED_APIC
-	def_bool y
-	depends on SMP && X86_ES7000 && MPENTIUMIII
-
 source "arch/x86/Kconfig.cpu"
 
 config HPET_TIMER
@@ -561,7 +563,7 @@ config AMD_IOMMU
 # need this always selected by IOMMU for the VIA workaround
 config SWIOTLB
-	bool
+	def_bool y if X86_64
 	help
 	  Support for software bounce buffers used on x86-64 systems
 	  which don't have a hardware IOMMU (e.g. the current generation
@@ -652,6 +654,30 @@ config X86_VISWS_APIC
 	def_bool y
 	depends on X86_32 && X86_VISWS
 
+config X86_REROUTE_FOR_BROKEN_BOOT_IRQS
+	bool "Reroute for broken boot IRQs"
+	default n
+	depends on X86_IO_APIC
+	help
+	  This option enables a workaround that fixes a source of
+	  spurious interrupts. This is recommended when threaded
+	  interrupt handling is used on systems where the generation of
+	  superfluous "boot interrupts" cannot be disabled.
+
+	  Some chipsets generate a legacy INTx "boot IRQ" when the IRQ
+	  entry in the chipset's IO-APIC is masked (as, e.g. the RT
+	  kernel does during interrupt handling). On chipsets where this
+	  boot IRQ generation cannot be disabled, this workaround keeps
+	  the original IRQ line masked so that only the equivalent "boot
+	  IRQ" is delivered to the CPUs. The workaround also tells the
+	  kernel to set up the IRQ handler on the boot IRQ line. In this
+	  way only one interrupt is delivered to the kernel. Otherwise
+	  the spurious second interrupt may cause the kernel to bring
+	  down (vital) interrupt lines.
+
+	  Only affects "broken" chipsets. Interrupt sharing may be
+	  increased on these systems.
+
 config X86_MCE
 	bool "Machine Check Exception"
 	depends on !X86_VOYAGER
@@ -948,24 +974,37 @@ config X86_PAE
 config ARCH_PHYS_ADDR_T_64BIT
 	def_bool X86_64 || X86_PAE
 
+config DIRECT_GBPAGES
+	bool "Enable 1GB pages for kernel pagetables" if EMBEDDED
+	default y
+	depends on X86_64
+	help
+	  Allow the kernel linear mapping to use 1GB pages on CPUs that
+	  support it. This can improve the kernel's performance a tiny bit by
+	  reducing TLB pressure. If in doubt, say "Y".
+
 # Common NUMA Features
 config NUMA
-	bool "Numa Memory Allocation and Scheduler Support (EXPERIMENTAL)"
+	bool "Numa Memory Allocation and Scheduler Support"
 	depends on SMP
 	depends on X86_64 || (X86_32 && HIGHMEM64G && (X86_NUMAQ || X86_BIGSMP || X86_SUMMIT && ACPI) && EXPERIMENTAL)
 	default n if X86_PC
 	default y if (X86_NUMAQ || X86_SUMMIT || X86_BIGSMP)
 	help
 	  Enable NUMA (Non Uniform Memory Access) support.
+
 	  The kernel will try to allocate memory used by a CPU on the
 	  local memory controller of the CPU and add some more
 	  NUMA awareness to the kernel.
 
-	  For 32-bit this is currently highly experimental and should be only
-	  used for kernel development. It might also cause boot failures.
-	  For 64-bit this is recommended on all multiprocessor Opteron systems.
-	  If the system is EM64T, you should say N unless your system is
-	  EM64T NUMA.
+	  For 64-bit this is recommended if the system is Intel Core i7
+	  (or later), AMD Opteron, or EM64T NUMA.
+
+	  For 32-bit this is only needed on (rare) 32-bit-only platforms
+	  that support NUMA topologies, such as NUMAQ / Summit, or if you
+	  boot a 32-bit kernel on a 64-bit NUMA platform.
+
+	  Otherwise, you should say N.
 
 comment "NUMA (Summit) requires SMP, 64GB highmem support, ACPI"
 	depends on X86_32 && X86_SUMMIT && (!HIGHMEM64G || !ACPI)
@@ -1485,6 +1524,10 @@ config ARCH_ENABLE_MEMORY_HOTPLUG
 	def_bool y
 	depends on X86_64 || (X86_32 && HIGHMEM)
 
+config ARCH_ENABLE_MEMORY_HOTREMOVE
+	def_bool y
+	depends on MEMORY_HOTPLUG
+
 config HAVE_ARCH_EARLY_PFN_TO_NID
 	def_bool X86_64
 	depends on NUMA
@@ -1624,13 +1667,6 @@ config APM_ALLOW_INTS
 	  many of the newer IBM Thinkpads. If you experience hangs when you
 	  suspend, try setting this to Y. Otherwise, say N.
 
-config APM_REAL_MODE_POWER_OFF
-	bool "Use real mode APM BIOS call to power off"
-	help
-	  Use real mode APM BIOS calls to switch off the computer. This is
-	  a work-around for a number of buggy BIOSes. Switch this option on if
-	  your computer crashes instead of powering off properly.
-
 endif # APM
 
 source "arch/x86/kernel/cpu/cpufreq/Kconfig"
......
@@ -520,6 +520,7 @@ config X86_PTRACE_BTS
 	bool "Branch Trace Store"
 	default y
 	depends on X86_DEBUGCTLMSR
+	depends on BROKEN
 	help
 	  This adds a ptrace interface to the hardware's branch trace store.
......
@@ -114,18 +114,6 @@ config DEBUG_RODATA
 	  data. This is recommended so that we can catch kernel bugs sooner.
 	  If in doubt, say "Y".
 
-config DIRECT_GBPAGES
-	bool "Enable gbpages-mapped kernel pagetables"
-	depends on DEBUG_KERNEL && EXPERIMENTAL && X86_64
-	help
-	  Enable gigabyte pages support (if the CPU supports it). This can
-	  improve the kernel's performance a tiny bit by reducing TLB
-	  pressure.
-
-	  This is experimental code.
-
-	  If in doubt, say "N".
-
 config DEBUG_RODATA_TEST
 	bool "Testcase for the DEBUG_RODATA feature"
 	depends on DEBUG_RODATA
@@ -307,10 +295,10 @@ config OPTIMIZE_INLINING
 	  developers have marked 'inline'. Doing so takes away freedom from gcc to
 	  do what it thinks is best, which is desirable for the gcc 3.x series of
 	  compilers. The gcc 4.x series have a rewritten inlining algorithm and
-	  disabling this option will generate a smaller kernel there. Hopefully
-	  this algorithm is so good that allowing gcc4 to make the decision can
-	  become the default in the future, until then this option is there to
-	  test gcc for this.
+	  enabling this option will generate a smaller kernel there. Hopefully
+	  this algorithm is so good that allowing gcc 4.x and above to make the
+	  decision will become the default in the future. Until then this option
+	  is there to test gcc for this.
 
 	  If unsure, say N.
......
@@ -34,7 +34,7 @@ static struct mode_info cga_modes[] = {
 	{ VIDEO_80x25,  80, 25, 0 },
 };
 
-__videocard video_vga;
+static __videocard video_vga;
 
 /* Set basic 80x25 mode */
 static u8 vga_set_basic_mode(void)
@@ -259,7 +259,7 @@ static int vga_probe(void)
 	return mode_count[adapter];
 }
 
-__videocard video_vga = {
+static __videocard video_vga = {
 	.card_name	= "VGA",
 	.probe		= vga_probe,
 	.set_mode	= vga_set_mode,
......
@@ -226,7 +226,7 @@ static unsigned int mode_menu(void)
 
 #ifdef CONFIG_VIDEO_RETAIN
 /* Save screen content to the heap */
-struct saved_screen {
+static struct saved_screen {
 	int x, y;
 	int curx, cury;
 	u16 *data;
......
@@ -77,7 +77,7 @@ CONFIG_AUDIT=y
 CONFIG_AUDITSYSCALL=y
 CONFIG_AUDIT_TREE=y
 # CONFIG_IKCONFIG is not set
-CONFIG_LOG_BUF_SHIFT=17
+CONFIG_LOG_BUF_SHIFT=18
 CONFIG_CGROUPS=y
 # CONFIG_CGROUP_DEBUG is not set
 CONFIG_CGROUP_NS=y
@@ -298,7 +298,7 @@ CONFIG_KEXEC=y
 CONFIG_CRASH_DUMP=y
 # CONFIG_KEXEC_JUMP is not set
 CONFIG_PHYSICAL_START=0x1000000
-CONFIG_RELOCATABLE=y
+# CONFIG_RELOCATABLE is not set
 CONFIG_PHYSICAL_ALIGN=0x200000
 CONFIG_HOTPLUG_CPU=y
 # CONFIG_COMPAT_VDSO is not set
......
@@ -77,7 +77,7 @@ CONFIG_AUDIT=y
 CONFIG_AUDITSYSCALL=y
 CONFIG_AUDIT_TREE=y
 # CONFIG_IKCONFIG is not set
-CONFIG_LOG_BUF_SHIFT=17
+CONFIG_LOG_BUF_SHIFT=18
 CONFIG_CGROUPS=y
 # CONFIG_CGROUP_DEBUG is not set
 CONFIG_CGROUP_NS=y
@@ -298,7 +298,7 @@ CONFIG_SCHED_HRTICK=y
 CONFIG_KEXEC=y
 CONFIG_CRASH_DUMP=y
 CONFIG_PHYSICAL_START=0x1000000
-CONFIG_RELOCATABLE=y
+# CONFIG_RELOCATABLE is not set
 CONFIG_PHYSICAL_ALIGN=0x200000
 CONFIG_HOTPLUG_CPU=y
 # CONFIG_COMPAT_VDSO is not set
......
@@ -32,6 +32,8 @@
 #include <asm/proto.h>
 #include <asm/vdso.h>
 
+#include <asm/sigframe.h>
+
 #define DEBUG_SIG 0
 
 #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
@@ -41,7 +43,6 @@
 		   X86_EFLAGS_ZF | X86_EFLAGS_AF | X86_EFLAGS_PF | \
 		   X86_EFLAGS_CF)
 
-asmlinkage int do_signal(struct pt_regs *regs, sigset_t *oldset);
 void signal_fault(struct pt_regs *regs, void __user *frame, char *where);
 
 int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
@@ -173,47 +174,28 @@ asmlinkage long sys32_sigaltstack(const stack_ia32_t __user *uss_ptr,
 /*
  * Do a signal return; undo the signal stack.
  */
+#define COPY(x)			{		\
+	err |= __get_user(regs->x, &sc->x);	\
+}
 
-struct sigframe
-{
-	u32 pretcode;
-	int sig;
-	struct sigcontext_ia32 sc;
-	struct _fpstate_ia32 fpstate_unused; /* look at kernel/sigframe.h */
-	unsigned int extramask[_COMPAT_NSIG_WORDS-1];
-	char retcode[8];
-	/* fp state follows here */
-};
-
-struct rt_sigframe
-{
-	u32 pretcode;
-	int sig;
-	u32 pinfo;
-	u32 puc;
-	compat_siginfo_t info;
-	struct ucontext_ia32 uc;
-	char retcode[8];
-	/* fp state follows here */
-};
-
-#define COPY(x)		{		\
-	unsigned int reg;		\
-	err |= __get_user(reg, &sc->x);	\
-	regs->x = reg;			\
+#define COPY_SEG_CPL3(seg)	{		\
+	unsigned short tmp;			\
+	err |= __get_user(tmp, &sc->seg);	\
+	regs->seg = tmp | 3;			\
 }
 
-#define RELOAD_SEG(seg,mask)					\
-	{ unsigned int cur;					\
-	  unsigned short pre;					\
-	  err |= __get_user(pre, &sc->seg);			\
-	  savesegment(seg, cur);				\
-	  pre |= mask;						\
-	  if (pre != cur) loadsegment(seg, pre); }
+#define RELOAD_SEG(seg)		{		\
+	unsigned int cur, pre;			\
+	err |= __get_user(pre, &sc->seg);	\
+	savesegment(seg, cur);			\
+	pre |= 3;				\
+	if (pre != cur)				\
+		loadsegment(seg, pre);		\
+}
 
 static int ia32_restore_sigcontext(struct pt_regs *regs,
 				   struct sigcontext_ia32 __user *sc,
-				   unsigned int *peax)
+				   unsigned int *pax)
 {
 	unsigned int tmpflags, gs, oldgs, err = 0;
 	void __user *buf;
@@ -240,18 +222,16 @@ static int ia32_restore_sigcontext(struct pt_regs *regs,
 	if (gs != oldgs)
 		load_gs_index(gs);
 
-	RELOAD_SEG(fs, 3);
-	RELOAD_SEG(ds, 3);
-	RELOAD_SEG(es, 3);
+	RELOAD_SEG(fs);
+	RELOAD_SEG(ds);
+	RELOAD_SEG(es);
 
 	COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx);
 	COPY(dx); COPY(cx); COPY(ip);
 	/* Don't touch extended registers */
 
-	err |= __get_user(regs->cs, &sc->cs);
-	regs->cs |= 3;
-	err |= __get_user(regs->ss, &sc->ss);
-	regs->ss |= 3;
+	COPY_SEG_CPL3(cs);
+	COPY_SEG_CPL3(ss);
 
 	err |= __get_user(tmpflags, &sc->flags);
 	regs->flags = (regs->flags & ~FIX_EFLAGS) | (tmpflags & FIX_EFLAGS);
@@ -262,15 +242,13 @@ static int ia32_restore_sigcontext(struct pt_regs *regs,
 	buf = compat_ptr(tmp);
 	err |= restore_i387_xstate_ia32(buf);
 
-	err |= __get_user(tmp, &sc->ax);
-	*peax = tmp;
+	err |= __get_user(*pax, &sc->ax);
 
 	return err;
 }
 
 asmlinkage long sys32_sigreturn(struct pt_regs *regs)
 {
-	struct sigframe __user *frame = (struct sigframe __user *)(regs->sp-8);
+	struct sigframe_ia32 __user *frame = (struct sigframe_ia32 __user *)(regs->sp-8);
 	sigset_t set;
 	unsigned int ax;
@@ -300,12 +278,12 @@ asmlinkage long sys32_sigreturn(struct pt_regs *regs)
 
 asmlinkage long sys32_rt_sigreturn(struct pt_regs *regs)
 {
-	struct rt_sigframe __user *frame;
+	struct rt_sigframe_ia32 __user *frame;
 	sigset_t set;
 	unsigned int ax;
 	struct pt_regs tregs;
 
-	frame = (struct rt_sigframe __user *)(regs->sp - 4);
+	frame = (struct rt_sigframe_ia32 __user *)(regs->sp - 4);
 
 	if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
 		goto badframe;
@@ -359,20 +337,15 @@ static int ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc,
 	err |= __put_user(regs->dx, &sc->dx);
 	err |= __put_user(regs->cx, &sc->cx);
 	err |= __put_user(regs->ax, &sc->ax);
-	err |= __put_user(regs->cs, &sc->cs);
-	err |= __put_user(regs->ss, &sc->ss);
 	err |= __put_user(current->thread.trap_no, &sc->trapno);
 	err |= __put_user(current->thread.error_code, &sc->err);
 	err |= __put_user(regs->ip, &sc->ip);
+	err |= __put_user(regs->cs, (unsigned int __user *)&sc->cs);
 	err |= __put_user(regs->flags, &sc->flags);
 	err |= __put_user(regs->sp, &sc->sp_at_signal);
+	err |= __put_user(regs->ss, (unsigned int __user *)&sc->ss);
 
-	tmp = save_i387_xstate_ia32(fpstate);
-	if (tmp < 0)
-		err = -EFAULT;
-	else
-		err |= __put_user(ptr_to_compat(tmp ? fpstate : NULL),
-					&sc->fpstate);
+	err |= __put_user(ptr_to_compat(fpstate), &sc->fpstate);
 
 	/* non-iBCS2 extensions.. */
 	err |= __put_user(mask, &sc->oldmask);
@@ -400,7 +373,7 @@ static void __user *get_sigframe(struct k_sigaction *ka, struct pt_regs *regs,
 	}
 
 	/* This is the legacy signal stack switching. */
-	else if ((regs->ss & 0xffff) != __USER_DS &&
+	else if ((regs->ss & 0xffff) != __USER32_DS &&
 		!(ka->sa.sa_flags & SA_RESTORER) &&
 		 ka->sa.sa_restorer)
 		sp = (unsigned long) ka->sa.sa_restorer;
@@ -408,6 +381,8 @@ static void __user *get_sigframe(struct k_sigaction *ka, struct pt_regs *regs,
 	if (used_math()) {
 		sp = sp - sig_xstate_ia32_size;
 		*fpstate = (struct _fpstate_ia32 *) sp;
+		if (save_i387_xstate_ia32(*fpstate) < 0)
+			return (void __user *) -1L;
 	}
 
 	sp -= frame_size;
@@ -420,7 +395,7 @@ static void __user *get_sigframe(struct k_sigaction *ka, struct pt_regs *regs,
 int ia32_setup_frame(int sig, struct k_sigaction *ka,
 		     compat_sigset_t *set, struct pt_regs *regs)
 {
-	struct sigframe __user *frame;
+	struct sigframe_ia32 __user *frame;
 	void __user *restorer;
 	int err = 0;
 	void __user *fpstate = NULL;
@@ -430,12 +405,10 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
 		u16 poplmovl;
 		u32 val;
 		u16 int80;
-		u16 pad;
 	} __attribute__((packed)) code = {
 		0xb858,		 /* popl %eax ; movl $...,%eax */
 		__NR_ia32_sigreturn,
 		0x80cd,		/* int $0x80 */
-		0,
 	};
 
 	frame = get_sigframe(ka, regs, sizeof(*frame), &fpstate);
@@ -471,7 +444,7 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
 	 * These are actually not used anymore, but left because some
 	 * gdb versions depend on them as a marker.
 	 */
-	err |= __copy_to_user(frame->retcode, &code, 8);
+	err |= __put_user(*((u64 *)&code), (u64 *)frame->retcode);
 	if (err)
 		return -EFAULT;
 
@@ -501,7 +474,7 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
 int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 			compat_sigset_t *set, struct pt_regs *regs)
 {
-	struct rt_sigframe __user *frame;
+	struct rt_sigframe_ia32 __user *frame;
 	void __user *restorer;
 	int err = 0;
 	void __user *fpstate = NULL;
@@ -511,8 +484,7 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 		u8 movl;
 		u32 val;
 		u16 int80;
-		u16 pad;
-		u8  pad2;
+		u8  pad;
 	} __attribute__((packed)) code = {
 		0xb8,
 		__NR_ia32_rt_sigreturn,
@@ -559,7 +531,7 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	 * Not actually used anymore, but left because some gdb
 	 * versions need it.
 	 */
-	err |= __copy_to_user(frame->retcode, &code, 8);
+	err |= __put_user(*((u64 *)&code), (u64 *)frame->retcode);
 	if (err)
 		return -EFAULT;
......
@@ -193,6 +193,7 @@ extern u8 setup_APIC_eilvt_ibs(u8 vector, u8 msg_type, u8 mask);
 static inline void lapic_shutdown(void) { }
 #define local_apic_timer_c2_ok		1
 static inline void init_apic_mappings(void) { }
+static inline void disable_local_APIC(void) { }
 
 #endif /* !CONFIG_X86_LOCAL_APIC */
......
@@ -24,8 +24,6 @@ static inline cpumask_t target_cpus(void)
 #define INT_DELIVERY_MODE	(dest_Fixed)
 #define INT_DEST_MODE		(0)    /* phys delivery to target proc */
 #define NO_BALANCE_IRQ		(0)
-#define WAKE_SECONDARY_VIA_INIT
-
 
 static inline unsigned long check_apicid_used(physid_mask_t bitmap, int apicid)
 {
......
@@ -9,7 +9,7 @@
 #ifdef CONFIG_X86_32
 # define __BUG_C0	"2:\t.long 1b, %c0\n"
 #else
-# define __BUG_C0	"2:\t.quad 1b, %c0\n"
+# define __BUG_C0	"2:\t.long 1b - 2b, %c0 - 2b\n"
 #endif
 
 #define BUG()							\
......
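The new __BUG_C0 emits 32-bit self-relative offsets instead of absolute .quad
pointers, halving each 64-bit bug-table entry (this is what the
GENERIC_BUG_RELATIVE_POINTERS option added earlier selects). A hedged C sketch
of how such an offset is decoded (the function name is illustrative):

    /* The stored value is "target - address_of_field", so adding the
     * field's own address back recovers the absolute target. */
    #include <stdint.h>

    static inline const void *decode_rel_pointer(const int32_t *field)
    {
            return (const char *)field + *field;
    }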
@@ -80,7 +80,6 @@
 #define X86_FEATURE_UP		(3*32+ 9) /* smp kernel running on up */
 #define X86_FEATURE_FXSAVE_LEAK (3*32+10) /* "" FXSAVE leaks FOP/FIP/FOP */
 #define X86_FEATURE_ARCH_PERFMON (3*32+11) /* Intel Architectural PerfMon */
-#define X86_FEATURE_NOPL	(3*32+20) /* The NOPL (0F 1F) instructions */
 #define X86_FEATURE_PEBS	(3*32+12) /* Precise-Event Based Sampling */
 #define X86_FEATURE_BTS		(3*32+13) /* Branch Trace Store */
 #define X86_FEATURE_SYSCALL32	(3*32+14) /* "" syscall in ia32 userspace */
@@ -92,6 +91,8 @@
 #define X86_FEATURE_NOPL	(3*32+20) /* The NOPL (0F 1F) instructions */
 #define X86_FEATURE_AMDC1E	(3*32+21) /* AMD C1E detected */
 #define X86_FEATURE_XTOPOLOGY	(3*32+22) /* cpu topology enum extensions */
+#define X86_FEATURE_TSC_RELIABLE (3*32+23) /* TSC is known to be reliable */
+#define X86_FEATURE_NONSTOP_TSC	(3*32+24) /* TSC does not stop in C states */
 
 /* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */
 #define X86_FEATURE_XMM3	(4*32+ 0) /* "pni" SSE-3 */
@@ -117,6 +118,7 @@
 #define X86_FEATURE_XSAVE	(4*32+26) /* XSAVE/XRSTOR/XSETBV/XGETBV */
 #define X86_FEATURE_OSXSAVE	(4*32+27) /* "" XSAVE enabled in the OS */
 #define X86_FEATURE_AVX		(4*32+28) /* Advanced Vector Extensions */
+#define X86_FEATURE_HYPERVISOR	(4*32+31) /* Running on a hypervisor */
 
 /* VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001, word 5 */
 #define X86_FEATURE_XSTORE	(5*32+ 2) /* "rng" RNG present (xstore) */
@@ -237,6 +239,7 @@ extern const char * const x86_power_flags[32];
 #define cpu_has_xmm4_2		boot_cpu_has(X86_FEATURE_XMM4_2)
 #define cpu_has_x2apic		boot_cpu_has(X86_FEATURE_X2APIC)
 #define cpu_has_xsave		boot_cpu_has(X86_FEATURE_XSAVE)
+#define cpu_has_hypervisor	boot_cpu_has(X86_FEATURE_HYPERVISOR)
 
 #if defined(CONFIG_X86_INVLPG) || defined(CONFIG_X86_64)
 # define cpu_has_invlpg		1
......
@@ -71,12 +71,10 @@ static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
 /* Make sure we keep the same behaviour */
 static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
-#ifdef CONFIG_X86_64
 	struct dma_mapping_ops *ops = get_dma_ops(dev);
 	if (ops->mapping_error)
 		return ops->mapping_error(dev, dma_addr);
 
-#endif
 	return (dma_addr == bad_dma_address);
 }
......
@@ -6,56 +6,91 @@
 #endif
 
 /*
-   Macros for dwarf2 CFI unwind table entries.
-   See "as.info" for details on these pseudo ops. Unfortunately
-   they are only supported in very new binutils, so define them
-   away for older version.
+ * Macros for dwarf2 CFI unwind table entries.
+ * See "as.info" for details on these pseudo ops. Unfortunately
+ * they are only supported in very new binutils, so define them
+ * away for older version.
  */
 
 #ifdef CONFIG_AS_CFI
 
 #define CFI_STARTPROC		.cfi_startproc
 #define CFI_ENDPROC		.cfi_endproc
 #define CFI_DEF_CFA		.cfi_def_cfa
 #define CFI_DEF_CFA_REGISTER	.cfi_def_cfa_register
 #define CFI_DEF_CFA_OFFSET	.cfi_def_cfa_offset
 #define CFI_ADJUST_CFA_OFFSET	.cfi_adjust_cfa_offset
 #define CFI_OFFSET		.cfi_offset
 #define CFI_REL_OFFSET		.cfi_rel_offset
 #define CFI_REGISTER		.cfi_register
 #define CFI_RESTORE		.cfi_restore
 #define CFI_REMEMBER_STATE	.cfi_remember_state
 #define CFI_RESTORE_STATE	.cfi_restore_state
 #define CFI_UNDEFINED		.cfi_undefined
 
 #ifdef CONFIG_AS_CFI_SIGNAL_FRAME
 #define CFI_SIGNAL_FRAME	.cfi_signal_frame
 #else
 #define CFI_SIGNAL_FRAME
 #endif
 
 #else
 
-/* Due to the structure of pre-exisiting code, don't use assembler line
-   comment character # to ignore the arguments. Instead, use a dummy macro. */
+/*
+ * Due to the structure of pre-exisiting code, don't use assembler line
+ * comment character # to ignore the arguments. Instead, use a dummy macro.
+ */
 .macro cfi_ignore a=0, b=0, c=0, d=0
 .endm
 
 #define CFI_STARTPROC		cfi_ignore
 #define CFI_ENDPROC		cfi_ignore
 #define CFI_DEF_CFA		cfi_ignore
 #define CFI_DEF_CFA_REGISTER	cfi_ignore
 #define CFI_DEF_CFA_OFFSET	cfi_ignore
 #define CFI_ADJUST_CFA_OFFSET	cfi_ignore
 #define CFI_OFFSET		cfi_ignore
 #define CFI_REL_OFFSET		cfi_ignore
 #define CFI_REGISTER		cfi_ignore
 #define CFI_RESTORE		cfi_ignore
 #define CFI_REMEMBER_STATE	cfi_ignore
 #define CFI_RESTORE_STATE	cfi_ignore
 #define CFI_UNDEFINED		cfi_ignore
 #define CFI_SIGNAL_FRAME	cfi_ignore
 
 #endif
 
+/*
+ * An attempt to make CFI annotations more or less
+ * correct and shorter. It is implied that you know
+ * what you're doing if you use them.
+ */
+#ifdef __ASSEMBLY__
+#ifdef CONFIG_X86_64
+	.macro pushq_cfi reg
+	pushq \reg
+	CFI_ADJUST_CFA_OFFSET 8
+	.endm
+
+	.macro popq_cfi reg
+	popq \reg
+	CFI_ADJUST_CFA_OFFSET -8
+	.endm
+
+	.macro movq_cfi reg offset=0
+	movq %\reg, \offset(%rsp)
+	CFI_REL_OFFSET \reg, \offset
+	.endm
+
+	.macro movq_cfi_restore offset reg
+	movq \offset(%rsp), %\reg
+	CFI_RESTORE \reg
+	.endm
+#else /*!CONFIG_X86_64*/
+
+/* 32bit defenitions are missed yet */
+
+#endif /*!CONFIG_X86_64*/
+#endif /*__ASSEMBLY__*/
+
 #endif /* _ASM_X86_DWARF2_H */
@@ -8,7 +8,9 @@ enum reboot_type {
 	BOOT_BIOS = 'b',
 #endif
 	BOOT_ACPI = 'a',
-	BOOT_EFI = 'e'
+	BOOT_EFI = 'e',
+	BOOT_CF9 = 'p',
+	BOOT_CF9_COND = 'q',
 };
 
 extern enum reboot_type reboot_type;
......
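The new BOOT_CF9 types reboot through the PCI reset-control register at I/O
port 0xCF9. A hedged sketch of the idea in C (it mirrors the general CF9
technique, not necessarily the kernel's exact code):

    #include <linux/types.h>
    #include <linux/delay.h>
    #include <asm/io.h>

    static void cf9_hard_reset(void)
    {
            u8 cf9 = inb(0xcf9) & ~6;       /* preserve unrelated bits */

            outb(cf9 | 2, 0xcf9);           /* request a hard reset */
            udelay(50);
            outb(cf9 | 6, 0xcf9);           /* pull the reset line */
            udelay(50);
    }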
@@ -9,31 +9,27 @@ static inline int apic_id_registered(void)
 	return (1);
 }
 
-static inline cpumask_t target_cpus(void)
+static inline cpumask_t target_cpus_cluster(void)
 {
-#if defined CONFIG_ES7000_CLUSTERED_APIC
 	return CPU_MASK_ALL;
-#else
+}
+
+static inline cpumask_t target_cpus(void)
+{
 	return cpumask_of_cpu(smp_processor_id());
-#endif
 }
 
-#if defined CONFIG_ES7000_CLUSTERED_APIC
-#define APIC_DFR_VALUE		(APIC_DFR_CLUSTER)
-#define INT_DELIVERY_MODE	(dest_LowestPrio)
-#define INT_DEST_MODE		(1)    /* logical delivery broadcast to all procs */
-#define NO_BALANCE_IRQ		(1)
-#undef  WAKE_SECONDARY_VIA_INIT
-#define WAKE_SECONDARY_VIA_MIP
-#else
+#define APIC_DFR_VALUE_CLUSTER		(APIC_DFR_CLUSTER)
+#define INT_DELIVERY_MODE_CLUSTER	(dest_LowestPrio)
+#define INT_DEST_MODE_CLUSTER		(1) /* logical delivery broadcast to all procs */
+#define NO_BALANCE_IRQ_CLUSTER		(1)
+
 #define APIC_DFR_VALUE		(APIC_DFR_FLAT)
 #define INT_DELIVERY_MODE	(dest_Fixed)
 #define INT_DEST_MODE		(0)    /* phys delivery to target procs */
 #define NO_BALANCE_IRQ		(0)
 #undef  APIC_DEST_LOGICAL
 #define APIC_DEST_LOGICAL	0x0
-#define WAKE_SECONDARY_VIA_INIT
-#endif
 
 static inline unsigned long check_apicid_used(physid_mask_t bitmap, int apicid)
 {
@@ -60,6 +56,16 @@ static inline unsigned long calculate_ldr(int cpu)
  * an APIC.  See e.g. "AP-388 82489DX User's Manual" (Intel
  * document number 292116).  So here it goes...
  */
+static inline void init_apic_ldr_cluster(void)
+{
+	unsigned long val;
+	int cpu = smp_processor_id();
+
+	apic_write(APIC_DFR, APIC_DFR_VALUE_CLUSTER);
+	val = calculate_ldr(cpu);
+	apic_write(APIC_LDR, val);
+}
+
 static inline void init_apic_ldr(void)
 {
 	unsigned long val;
@@ -70,10 +76,6 @@ static inline void init_apic_ldr(void)
 	apic_write(APIC_LDR, val);
 }
 
-#ifndef CONFIG_X86_GENERICARCH
-extern void enable_apic_mode(void);
-#endif
-
 extern int apic_version [MAX_APICS];
 static inline void setup_apic_routing(void)
 {
@@ -144,7 +146,7 @@ static inline int check_phys_apicid_present(int cpu_physical_apicid)
 	return (1);
 }
 
-static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
+static inline unsigned int cpu_mask_to_apicid_cluster(cpumask_t cpumask)
 {
 	int num_bits_set;
 	int cpus_found = 0;
@@ -154,11 +156,7 @@ static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
 	num_bits_set = cpus_weight(cpumask);
 	/* Return id to all */
 	if (num_bits_set == NR_CPUS)
-#if defined CONFIG_ES7000_CLUSTERED_APIC
 		return 0xFF;
-#else
-		return cpu_to_logical_apicid(0);
-#endif
 	/*
 	 * The cpus in the mask must all be on the apic cluster.  If are not
 	 * on the same apicid cluster return default value of TARGET_CPUS.
@@ -171,11 +169,40 @@ static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
 			if (apicid_cluster(apicid) !=
 					apicid_cluster(new_apicid)){
 				printk ("%s: Not a valid mask!\n", __func__);
-#if defined CONFIG_ES7000_CLUSTERED_APIC
 				return 0xFF;
-#else
+			}
+			apicid = new_apicid;
+			cpus_found++;
+		}
+		cpu++;
+	}
+	return apicid;
+}
+
+static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
+{
+	int num_bits_set;
+	int cpus_found = 0;
+	int cpu;
+	int apicid;
+
+	num_bits_set = cpus_weight(cpumask);
+	/* Return id to all */
+	if (num_bits_set == NR_CPUS)
+		return cpu_to_logical_apicid(0);
+	/*
+	 * The cpus in the mask must all be on the apic cluster.  If are not
+	 * on the same apicid cluster return default value of TARGET_CPUS.
+	 */
+	cpu = first_cpu(cpumask);
+	apicid = cpu_to_logical_apicid(cpu);
+	while (cpus_found < num_bits_set) {
+		if (cpu_isset(cpu, cpumask)) {
+			int new_apicid = cpu_to_logical_apicid(cpu);
+			if (apicid_cluster(apicid) !=
+					apicid_cluster(new_apicid)){
+				printk ("%s: Not a valid mask!\n", __func__);
 				return cpu_to_logical_apicid(0);
-#endif
 			}
 			apicid = new_apicid;
 			cpus_found++;
......
#ifndef __ASM_ES7000_WAKECPU_H #ifndef __ASM_ES7000_WAKECPU_H
#define __ASM_ES7000_WAKECPU_H #define __ASM_ES7000_WAKECPU_H
/* #define TRAMPOLINE_PHYS_LOW 0x467
* This file copes with machines that wakeup secondary CPUs by the #define TRAMPOLINE_PHYS_HIGH 0x469
* INIT, INIT, STARTUP sequence.
*/
#ifdef CONFIG_ES7000_CLUSTERED_APIC
#define WAKE_SECONDARY_VIA_MIP
#else
#define WAKE_SECONDARY_VIA_INIT
#endif
#ifdef WAKE_SECONDARY_VIA_MIP
extern int es7000_start_cpu(int cpu, unsigned long eip);
static inline int
wakeup_secondary_cpu(int phys_apicid, unsigned long start_eip)
{
int boot_error = 0;
boot_error = es7000_start_cpu(phys_apicid, start_eip);
return boot_error;
}
#endif
#define TRAMPOLINE_LOW phys_to_virt(0x467)
#define TRAMPOLINE_HIGH phys_to_virt(0x469)
#define boot_cpu_apicid boot_cpu_physical_apicid
static inline void wait_for_init_deassert(atomic_t *deassert)
{
#ifdef WAKE_SECONDARY_VIA_INIT
#ifndef CONFIG_ES7000_CLUSTERED_APIC
	while (!atomic_read(deassert))
		cpu_relax();
#endif
...@@ -50,9 +26,12 @@ static inline void restore_NMI_vector(unsigned short *high, unsigned short *low)
{
}
#define inquire_remote_apic(apicid) do {		\
		if (apic_verbosity >= APIC_DEBUG)	\
			__inquire_remote_apic(apicid);	\
	} while (0)
extern void __inquire_remote_apic(int apicid);
static inline void inquire_remote_apic(int apicid)
{
if (apic_verbosity >= APIC_DEBUG)
__inquire_remote_apic(apicid);
}
#endif /* __ASM_MACH_WAKECPU_H */
...@@ -29,6 +29,39 @@ extern int fix_aperture;
#define AMD64_GARTCACHECTL	0x9c
#define AMD64_GARTEN		(1<<0)
#ifdef CONFIG_GART_IOMMU
extern int gart_iommu_aperture;
extern int gart_iommu_aperture_allowed;
extern int gart_iommu_aperture_disabled;
extern void early_gart_iommu_check(void);
extern void gart_iommu_init(void);
extern void gart_iommu_shutdown(void);
extern void __init gart_parse_options(char *);
extern void gart_iommu_hole_init(void);
#else
#define gart_iommu_aperture 0
#define gart_iommu_aperture_allowed 0
#define gart_iommu_aperture_disabled 1
static inline void early_gart_iommu_check(void)
{
}
static inline void gart_iommu_init(void)
{
}
static inline void gart_iommu_shutdown(void)
{
}
static inline void gart_parse_options(char *options)
{
}
static inline void gart_iommu_hole_init(void)
{
}
#endif
extern int agp_amd64_init(void);
static inline void enable_gart_translation(struct pci_dev *dev, u64 addr)
......
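The CONFIG_GART_IOMMU block moved here keeps the usual pattern for optional subsystems: real declarations when the feature is built in, empty inline stubs otherwise, so call sites need no #ifdef of their own. A compile-time sketch of the same pattern, with illustrative names (not the kernel's symbols):

#include <stdio.h>

#ifdef CONFIG_GART_IOMMU_DEMO		/* illustrative config switch */
extern void gart_iommu_init_demo(void);	/* real implementation elsewhere */
#else
static inline void gart_iommu_init_demo(void) { }	/* compiles away */
#endif

int main(void)
{
	gart_iommu_init_demo();	/* call site stays clean either way */
	printf("no #ifdef needed at the call site\n");
	return 0;
}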
...@@ -2,6 +2,7 @@
#define _ASM_X86_GENAPIC_32_H
#include <asm/mpspec.h>
#include <asm/atomic.h>
/*
 * Generic APIC driver interface.
...@@ -65,6 +66,14 @@ struct genapic {
	void (*send_IPI_allbutself)(int vector);
	void (*send_IPI_all)(int vector);
#endif
int (*wakeup_cpu)(int apicid, unsigned long start_eip);
int trampoline_phys_low;
int trampoline_phys_high;
void (*wait_for_init_deassert)(atomic_t *deassert);
void (*smp_callin_clear_local_apic)(void);
void (*store_NMI_vector)(unsigned short *high, unsigned short *low);
void (*restore_NMI_vector)(unsigned short *high, unsigned short *low);
void (*inquire_remote_apic)(int apicid);
};
#define APICFUNC(x) .x = x,
...@@ -105,16 +114,24 @@ struct genapic {
	APICFUNC(get_apic_id)				\
	.apic_id_mask = APIC_ID_MASK,			\
	APICFUNC(cpu_mask_to_apicid)			\
	APICFUNC(vector_allocation_domain)		\
	APICFUNC(acpi_madt_oem_check)			\
	IPIFUNC(send_IPI_mask)				\
	IPIFUNC(send_IPI_allbutself)			\
	IPIFUNC(send_IPI_all)				\
	APICFUNC(enable_apic_mode)			\
	APICFUNC(phys_pkg_id)				\
.trampoline_phys_low = TRAMPOLINE_PHYS_LOW, \
.trampoline_phys_high = TRAMPOLINE_PHYS_HIGH, \
APICFUNC(wait_for_init_deassert) \
APICFUNC(smp_callin_clear_local_apic) \
APICFUNC(store_NMI_vector) \
APICFUNC(restore_NMI_vector) \
APICFUNC(inquire_remote_apic) \
}
extern struct genapic *genapic;
extern void es7000_update_genapic_to_cluster(void);
enum uv_system_type {UV_NONE, UV_LEGACY_APIC, UV_X2APIC, UV_NON_UNIQUE_APIC};
#define get_uv_system_type()		UV_NONE
......
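The new genapic fields turn the secondary-CPU wakeup sequence into per-subarch hooks selected at runtime. A reduced model of that dispatch, with hypothetical names standing in for the kernel structures:

#include <stdio.h>

struct genapic_model {
	int (*wakeup_cpu)(int apicid, unsigned long start_eip);
	int trampoline_phys_low;
	int trampoline_phys_high;
};

/* One possible hook: the default INIT/STARTUP sequence. */
static int wakeup_via_init(int apicid, unsigned long start_eip)
{
	printf("INIT wakeup: apicid=%d eip=%#lx\n", apicid, start_eip);
	return 0;
}

static struct genapic_model model = {
	.wakeup_cpu		= wakeup_via_init,
	.trampoline_phys_low	= 0x467,
	.trampoline_phys_high	= 0x469,
};

int main(void)
{
	/* Generic code calls through the hook; subarchs swap the pointer. */
	return model.wakeup_cpu(1, 0x6000);
}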
...@@ -32,6 +32,8 @@ struct genapic {
	unsigned int (*get_apic_id)(unsigned long x);
	unsigned long (*set_apic_id)(unsigned int id);
	unsigned long apic_id_mask;
/* wakeup_secondary_cpu */
int (*wakeup_cpu)(int apicid, unsigned long start_eip);
};
extern struct genapic *genapic;
......
...@@ -22,6 +22,8 @@ DECLARE_PER_CPU(irq_cpustat_t, irq_stat);
#define __ARCH_IRQ_STAT
#define __IRQ_STAT(cpu, member) (per_cpu(irq_stat, cpu).member)
#define inc_irq_stat(member) (__get_cpu_var(irq_stat).member++)
void ack_bad_irq(unsigned int irq);
#include <linux/irq_cpustat.h>
......
...@@ -11,6 +11,8 @@
#define __ARCH_IRQ_STAT 1
#define inc_irq_stat(member) add_pda(member, 1)
#define local_softirq_pending() read_pda(__softirq_pending)
#define __ARCH_SET_SOFTIRQ_PENDING 1
......
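Both flavors of inc_irq_stat() above give call sites one spelling for "bump this cpu's interrupt counter", whether the storage is a per-cpu struct (32-bit) or the PDA (64-bit). A userspace model of the 32-bit form:

#include <stdio.h>

struct irq_cpustat { unsigned int apic_timer_irqs; };
static struct irq_cpustat irq_stat;	/* stands in for this cpu's copy */

#define inc_irq_stat(member)	(irq_stat.member++)

int main(void)
{
	inc_irq_stat(apic_timer_irqs);	/* identical call site on both arches */
	printf("apic_timer_irqs=%u\n", irq_stat.apic_timer_irqs);
	return 0;
}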
...@@ -109,9 +109,7 @@ extern asmlinkage void smp_invalidate_interrupt(struct pt_regs *);
#endif
#endif
#ifdef CONFIG_X86_32
extern void (*const interrupt[NR_VECTORS])(void);
#endif
extern void (*__initconst interrupt[NR_VECTORS-FIRST_EXTERNAL_VECTOR])(void);
typedef int vector_irq_t[NR_VECTORS];
DECLARE_PER_CPU(vector_irq_t, vector_irq);
......
/*
* Copyright (C) 2008, VMware, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
*/
#ifndef ASM_X86__HYPERVISOR_H
#define ASM_X86__HYPERVISOR_H
extern unsigned long get_hypervisor_tsc_freq(void);
extern void init_hypervisor(struct cpuinfo_x86 *c);
#endif
...@@ -129,24 +129,6 @@ typedef struct compat_siginfo {
	} _sifields;
} compat_siginfo_t;
struct sigframe32 {
u32 pretcode;
int sig;
struct sigcontext_ia32 sc;
struct _fpstate_ia32 fpstate;
unsigned int extramask[_COMPAT_NSIG_WORDS-1];
};
struct rt_sigframe32 {
u32 pretcode;
int sig;
u32 pinfo;
u32 puc;
compat_siginfo_t info;
struct ucontext_ia32 uc;
struct _fpstate_ia32 fpstate;
};
struct ustat32 {
	__u32			f_tfree;
	compat_ino_t		f_tinode;
......
...@@ -8,8 +8,13 @@ struct notifier_block;
void idle_notifier_register(struct notifier_block *n);
void idle_notifier_unregister(struct notifier_block *n);
#ifdef CONFIG_X86_64
void enter_idle(void);
void exit_idle(void);
#else /* !CONFIG_X86_64 */
static inline void enter_idle(void) { }
static inline void exit_idle(void) { }
#endif /* CONFIG_X86_64 */
void c1e_remove_cpu(int cpu);
......
...@@ -4,6 +4,7 @@
#define ARCH_HAS_IOREMAP_WC
#include <linux/compiler.h>
#include <asm-generic/int-ll64.h>
#define build_mmio_read(name, size, type, reg, barrier) \
static inline type name(const volatile void __iomem *addr) \
...@@ -45,21 +46,39 @@ build_mmio_write(__writel, "l", unsigned int, "r", )
#define mmiowb() barrier()
#ifdef CONFIG_X86_64
build_mmio_read(readq, "q", unsigned long, "=r", :"memory")
build_mmio_read(__readq, "q", unsigned long, "=r", )
build_mmio_write(writeq, "q", unsigned long, "r", :"memory")
build_mmio_write(__writeq, "q", unsigned long, "r", )
#define readq_relaxed(a) __readq(a)
#define __raw_readq __readq
#define __raw_writeq writeq
#else
static inline __u64 readq(const volatile void __iomem *addr)
{
const volatile u32 __iomem *p = addr;
u32 low, high;
low = readl(p);
high = readl(p + 1);
return low + ((u64)high << 32);
}
static inline void writeq(__u64 val, volatile void __iomem *addr)
{
writel(val, addr);
writel(val >> 32, addr+4);
}
/* Let people know we have them */
#define readq readq
#define writeq writeq
#endif
extern int iommu_bio_merge;
#define readq_relaxed(a)	readq(a)
#define __raw_readq(a) readq(a)
#define __raw_writeq(val, addr) writeq(val, addr)
/* Let people know that we have them */
#define readq readq
#define writeq writeq
#ifdef CONFIG_X86_32
# include "io_32.h"
......
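On 32-bit, readq()/writeq() above are synthesized from two 32-bit MMIO accesses, so the 64-bit access is not atomic and only suits registers that tolerate split access. A userspace model of the composition:

#include <assert.h>
#include <stdint.h>

/* Model of the 32-bit fallback above: a 64-bit value is assembled from
 * two 32-bit halves, low word first. NOT atomic. */
static uint64_t readq_split(const volatile uint32_t *p)
{
	uint32_t low = p[0], high = p[1];

	return low + ((uint64_t)high << 32);
}

static void writeq_split(uint64_t val, volatile uint32_t *p)
{
	p[0] = (uint32_t)val;
	p[1] = (uint32_t)(val >> 32);
}

int main(void)
{
	uint32_t reg[2];	/* stands in for a 64-bit device register */

	writeq_split(0x1122334455667788ULL, reg);
	assert(readq_split(reg) == 0x1122334455667788ULL);
	return 0;
}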
...@@ -232,8 +232,6 @@ void memset_io(volatile void __iomem *a, int b, size_t c);
#define flush_write_buffers()
#define BIO_VMERGE_BOUNDARY	iommu_bio_merge
/*
 * Convert a virtual cached pointer to an uncached pointer
 */
......
...@@ -156,11 +156,21 @@ extern int sis_apic_bug;
/* 1 if "noapic" boot option passed */
extern int skip_ioapic_setup;
/* 1 if "noapic" boot option passed */
extern int noioapicquirk;
/* -1 if "noapic" boot option passed */
extern int noioapicreroute;
/* 1 if the timer IRQ uses the '8259A Virtual Wire' mode */
extern int timer_through_8259;
static inline void disable_ioapic_setup(void)
{
#ifdef CONFIG_PCI
noioapicquirk = 1;
noioapicreroute = -1;
#endif
	skip_ioapic_setup = 1;
}
......
...@@ -12,37 +12,4 @@ extern unsigned long iommu_nr_pages(unsigned long addr, unsigned long len);
/* 10 seconds */
#define DMAR_OPERATION_TIMEOUT ((cycles_t) tsc_khz*10*1000)
#ifdef CONFIG_GART_IOMMU
extern int gart_iommu_aperture;
extern int gart_iommu_aperture_allowed;
extern int gart_iommu_aperture_disabled;
extern void early_gart_iommu_check(void);
extern void gart_iommu_init(void);
extern void gart_iommu_shutdown(void);
extern void __init gart_parse_options(char *);
extern void gart_iommu_hole_init(void);
#else
#define gart_iommu_aperture 0
#define gart_iommu_aperture_allowed 0
#define gart_iommu_aperture_disabled 1
static inline void early_gart_iommu_check(void)
{
}
static inline void gart_iommu_init(void)
{
}
static inline void gart_iommu_shutdown(void)
{
}
static inline void gart_parse_options(char *options)
{
}
static inline void gart_iommu_hole_init(void)
{
}
#endif
#endif /* _ASM_X86_IOMMU_H */
...@@ -5,21 +5,8 @@
# define PA_CONTROL_PAGE	0
# define VA_CONTROL_PAGE	1
# define PA_PGD			2
# define VA_PGD			3
# define PA_PTE_0		4
# define VA_PTE_0		5
# define PA_PTE_1		6
# define VA_PTE_1		7
# define PA_SWAP_PAGE		8
# ifdef CONFIG_X86_PAE
# define PA_PMD_0		9
# define VA_PMD_0		10
# define PA_PMD_1		11
# define VA_PMD_1		12
# define PAGES_NR		13
# else
# define PAGES_NR		9
# endif
# define PA_SWAP_PAGE		3
# define PAGES_NR		4
#else
# define PA_CONTROL_PAGE	0
# define VA_CONTROL_PAGE	1
...@@ -170,6 +157,20 @@ relocate_kernel(unsigned long indirection_page,
		unsigned long start_address) ATTRIB_NORET;
#endif
#ifdef CONFIG_X86_32
#define ARCH_HAS_KIMAGE_ARCH
struct kimage_arch {
pgd_t *pgd;
#ifdef CONFIG_X86_PAE
pmd_t *pmd0;
pmd_t *pmd1;
#endif
pte_t *pte0;
pte_t *pte1;
};
#endif
#endif /* __ASSEMBLY__ */
#endif /* _ASM_X86_KEXEC_H */
...@@ -57,5 +57,65 @@
#define __ALIGN_STR ".align 16,0x90"
#endif
/*
* to check ENTRY_X86/END_X86 and
* KPROBE_ENTRY_X86/KPROBE_END_X86
* unbalanced-missed-mixed appearance
*/
#define __set_entry_x86 .set ENTRY_X86_IN, 0
#define __unset_entry_x86 .set ENTRY_X86_IN, 1
#define __set_kprobe_x86 .set KPROBE_X86_IN, 0
#define __unset_kprobe_x86 .set KPROBE_X86_IN, 1
#define __macro_err_x86 .error "ENTRY_X86/KPROBE_X86 unbalanced,missed,mixed"
#define __check_entry_x86 \
.ifdef ENTRY_X86_IN; \
.ifeq ENTRY_X86_IN; \
__macro_err_x86; \
.abort; \
.endif; \
.endif
#define __check_kprobe_x86 \
.ifdef KPROBE_X86_IN; \
.ifeq KPROBE_X86_IN; \
__macro_err_x86; \
.abort; \
.endif; \
.endif
#define __check_entry_kprobe_x86 \
__check_entry_x86; \
__check_kprobe_x86
#define ENTRY_KPROBE_FINAL_X86 __check_entry_kprobe_x86
#define ENTRY_X86(name) \
__check_entry_kprobe_x86; \
__set_entry_x86; \
.globl name; \
__ALIGN; \
name:
#define END_X86(name) \
__unset_entry_x86; \
__check_entry_kprobe_x86; \
.size name, .-name
#define KPROBE_ENTRY_X86(name) \
__check_entry_kprobe_x86; \
__set_kprobe_x86; \
.pushsection .kprobes.text, "ax"; \
.globl name; \
__ALIGN; \
name:
#define KPROBE_END_X86(name) \
__unset_kprobe_x86; \
__check_entry_kprobe_x86; \
.size name, .-name; \
.popsection
#endif /* _ASM_X86_LINKAGE_H */
...@@ -32,11 +32,13 @@ static inline cpumask_t target_cpus(void)
#define vector_allocation_domain (genapic->vector_allocation_domain)
#define read_apic_id() (GET_APIC_ID(apic_read(APIC_ID)))
#define send_IPI_self (genapic->send_IPI_self)
#define wakeup_secondary_cpu (genapic->wakeup_cpu)
extern void setup_apic_routing(void);
#else
#define INT_DELIVERY_MODE dest_LowestPrio
#define INT_DEST_MODE 1     /* logical delivery broadcast to all procs */
#define TARGET_CPUS (target_cpus())
#define wakeup_secondary_cpu wakeup_secondary_cpu_via_init
/*
 * Set up the logical destination ID.
 *
......
#ifndef _ASM_X86_MACH_DEFAULT_MACH_WAKECPU_H
#define _ASM_X86_MACH_DEFAULT_MACH_WAKECPU_H
/*
 * This file copes with machines that wakeup secondary CPUs by the
 * INIT, INIT, STARTUP sequence.
 */
#define WAKE_SECONDARY_VIA_INIT
#define TRAMPOLINE_LOW phys_to_virt(0x467)
#define TRAMPOLINE_HIGH phys_to_virt(0x469)
#define boot_cpu_apicid boot_cpu_physical_apicid
#define TRAMPOLINE_PHYS_LOW (0x467)
#define TRAMPOLINE_PHYS_HIGH (0x469)
static inline void wait_for_init_deassert(atomic_t *deassert)
{
...@@ -33,9 +24,12 @@ static inline void restore_NMI_vector(unsigned short *high, unsigned short *low)
{
}
#define inquire_remote_apic(apicid) do {		\
		if (apic_verbosity >= APIC_DEBUG)	\
			__inquire_remote_apic(apicid);	\
	} while (0)
extern void __inquire_remote_apic(int apicid);
static inline void inquire_remote_apic(int apicid)
{
if (apic_verbosity >= APIC_DEBUG)
__inquire_remote_apic(apicid);
}
#endif /* _ASM_X86_MACH_DEFAULT_MACH_WAKECPU_H */
...@@ -13,9 +13,11 @@ static inline void smpboot_setup_warm_reset_vector(unsigned long start_eip)
	CMOS_WRITE(0xa, 0xf);
	local_flush_tlb();
	pr_debug("1.\n");
	*((volatile unsigned short *) TRAMPOLINE_HIGH) = start_eip >> 4;
	*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_HIGH)) =
start_eip >> 4;
pr_debug("2.\n"); pr_debug("2.\n");
	*((volatile unsigned short *) TRAMPOLINE_LOW) = start_eip & 0xf;
	*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) =
start_eip & 0xf;
pr_debug("3.\n"); pr_debug("3.\n");
} }
...@@ -32,7 +34,7 @@ static inline void smpboot_restore_warm_reset_vector(void) ...@@ -32,7 +34,7 @@ static inline void smpboot_restore_warm_reset_vector(void)
*/ */
CMOS_WRITE(0, 0xf); CMOS_WRITE(0, 0xf);
	*((volatile long *) phys_to_virt(0x467)) = 0;
	*((volatile long *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) = 0;
}
static inline void __init smpboot_setup_io_apic(void)
......
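The warm-reset vector at physical 0x467/0x469 holds a real-mode segment:offset pair, which is why the code stores start_eip >> 4 as the segment and start_eip & 0xf as the offset; the physical address reassembles as segment * 16 + offset. A worked check:

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long start_eip = 0x6000;	/* TRAMPOLINE_BASE */
	unsigned short seg = start_eip >> 4;	/* 0x0600 */
	unsigned short off = start_eip & 0xf;	/* 0x0000 */

	/* Real-mode address arithmetic recovers the original eip. */
	assert((unsigned long)seg * 16 + off == start_eip);
	printf("%04x:%04x\n", seg, off);
	return 0;
}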
...@@ -27,6 +27,7 @@
#define vector_allocation_domain (genapic->vector_allocation_domain)
#define enable_apic_mode (genapic->enable_apic_mode)
#define phys_pkg_id (genapic->phys_pkg_id)
#define wakeup_secondary_cpu (genapic->wakeup_cpu)
extern void generic_bigsmp_probe(void);
......
#ifndef _ASM_X86_MACH_GENERIC_MACH_WAKECPU_H
#define _ASM_X86_MACH_GENERIC_MACH_WAKECPU_H
#define TRAMPOLINE_PHYS_LOW (genapic->trampoline_phys_low)
#define TRAMPOLINE_PHYS_HIGH (genapic->trampoline_phys_high)
#define wait_for_init_deassert (genapic->wait_for_init_deassert)
#define smp_callin_clear_local_apic (genapic->smp_callin_clear_local_apic)
#define store_NMI_vector (genapic->store_NMI_vector)
#define restore_NMI_vector (genapic->restore_NMI_vector)
#define inquire_remote_apic (genapic->inquire_remote_apic)
#endif /* _ASM_X86_MACH_GENERIC_MACH_APIC_H */
...@@ -4,9 +4,8 @@
static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
#ifdef CONFIG_SMP
	unsigned cpu = smp_processor_id();
	if (per_cpu(cpu_tlbstate, cpu).state == TLBSTATE_OK)
		per_cpu(cpu_tlbstate, cpu).state = TLBSTATE_LAZY;
	if (x86_read_percpu(cpu_tlbstate.state) == TLBSTATE_OK)
		x86_write_percpu(cpu_tlbstate.state, TLBSTATE_LAZY);
#endif
}
...@@ -20,8 +19,8 @@ static inline void switch_mm(struct mm_struct *prev,
		/* stop flush ipis for the previous mm */
		cpu_clear(cpu, prev->cpu_vm_mask);
#ifdef CONFIG_SMP
		per_cpu(cpu_tlbstate, cpu).state = TLBSTATE_OK;
		per_cpu(cpu_tlbstate, cpu).active_mm = next;
		x86_write_percpu(cpu_tlbstate.state, TLBSTATE_OK);
		x86_write_percpu(cpu_tlbstate.active_mm, next);
#endif
		cpu_set(cpu, next->cpu_vm_mask);
...@@ -36,8 +35,8 @@ static inline void switch_mm(struct mm_struct *prev,
	}
#ifdef CONFIG_SMP
	else {
		per_cpu(cpu_tlbstate, cpu).state = TLBSTATE_OK;
		BUG_ON(per_cpu(cpu_tlbstate, cpu).active_mm != next);
		x86_write_percpu(cpu_tlbstate.state, TLBSTATE_OK);
		BUG_ON(x86_read_percpu(cpu_tlbstate.active_mm) != next);
		if (!cpu_test_and_set(cpu, next->cpu_vm_mask)) {
			/* We were in lazy tlb mode and leave_mm disabled
......
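The switch from per_cpu(cpu_tlbstate, cpu) to x86_read_percpu()/x86_write_percpu() drops the explicit cpu-index lookup: the accessors compile to a single segment-relative access on the local cpu. A rough userspace analogue using thread-local storage (the names below are stand-ins, not kernel code):

#include <stdio.h>

enum { TLBSTATE_OK = 1, TLBSTATE_LAZY = 2 };
struct tlb_state { int state; };

/* __thread stands in for per-cpu storage here: the compiler emits one
 * segment-relative access, with no cpu-index computation. */
static __thread struct tlb_state cpu_tlbstate = { TLBSTATE_OK };

static void enter_lazy_tlb_model(void)
{
	if (cpu_tlbstate.state == TLBSTATE_OK)
		cpu_tlbstate.state = TLBSTATE_LAZY;
}

int main(void)
{
	enter_lazy_tlb_model();
	printf("state=%d\n", cpu_tlbstate.state);	/* prints state=2 */
	return 0;
}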
...@@ -85,7 +85,9 @@
/* AMD64 MSRs. Not complete. See the architecture manual for a more
   complete list. */
#define MSR_AMD64_PATCH_LEVEL 0x0000008b
#define MSR_AMD64_NB_CFG		0xc001001f
#define MSR_AMD64_PATCH_LOADER 0xc0010020
#define MSR_AMD64_IBSFETCHCTL		0xc0011030
#define MSR_AMD64_IBSFETCHLINAD		0xc0011031
#define MSR_AMD64_IBSFETCHPHYSAD	0xc0011032
......
...@@ -22,10 +22,10 @@ static inline unsigned long long native_read_tscp(unsigned int *aux)
}
/*
 * i386 calling convention returns 64-bit value in edx:eax, while
 * x86_64 returns at rax. Also, the "A" constraint does not really
 * mean rdx:rax in x86_64, so we need specialized behaviour for each
 * architecture
 * both i386 and x86_64 return a 64-bit value in edx:eax, but gcc's "A"
 * constraint has different meanings. For i386, "A" means exactly
 * edx:eax, while for x86_64 it doesn't mean rdx:rax or edx:eax. Instead,
 * it means rax *or* rdx.
 */
#ifdef CONFIG_X86_64
#define DECLARE_ARGS(val, low, high)	unsigned low, high
...@@ -181,10 +181,10 @@ static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
}
#define rdtscl(low)						\
	((low) = (u32)native_read_tsc())
	((low) = (u32)__native_read_tsc())
#define rdtscll(val)						\
	((val) = native_read_tsc())
	((val) = __native_read_tsc())
#define rdpmc(counter, low, high)			\
do {							\
......
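The rewritten comment above is easy to verify: on i386 the "A" constraint really does name the edx:eax pair, so rdtsc can fill a single 64-bit variable, while on x86-64 the same constraint would pick rax or rdx and silently drop half the value. An i386-only sketch (build with gcc -m32):

#include <stdint.h>
#include <stdio.h>

/* i386 only: "A" binds the 64-bit output to edx:eax, exactly the
 * registers rdtsc writes. Do not use this form on x86-64. */
static inline uint64_t rdtsc32(void)
{
	uint64_t val;

	asm volatile("rdtsc" : "=A" (val));
	return val;
}

int main(void)
{
	printf("tsc=%llu\n", (unsigned long long)rdtsc32());
	return 0;
}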
...@@ -3,12 +3,8 @@
/* This file copes with machines that wakeup secondary CPUs by NMIs */
#define WAKE_SECONDARY_VIA_NMI
#define TRAMPOLINE_LOW phys_to_virt(0x8)
#define TRAMPOLINE_HIGH phys_to_virt(0xa)
#define boot_cpu_apicid boot_cpu_logical_apicid
#define TRAMPOLINE_PHYS_LOW (0x8)
#define TRAMPOLINE_PHYS_HIGH (0xa)
/* We don't do anything here because we use NMI's to boot instead */
static inline void wait_for_init_deassert(atomic_t *deassert)
...@@ -27,17 +23,23 @@ static inline void smp_callin_clear_local_apic(void)
static inline void store_NMI_vector(unsigned short *high, unsigned short *low)
{
	printk("Storing NMI vector\n");
	*high = *((volatile unsigned short *) TRAMPOLINE_HIGH);
	*low = *((volatile unsigned short *) TRAMPOLINE_LOW);
	*high =
		*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_HIGH));
*low =
*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_LOW));
}
static inline void restore_NMI_vector(unsigned short *high, unsigned short *low)
{
	printk("Restoring NMI vector\n");
	*((volatile unsigned short *) TRAMPOLINE_HIGH) = *high;
	*((volatile unsigned short *) TRAMPOLINE_LOW) = *low;
	*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_HIGH)) =
		*high;
*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) =
*low;
}
#define inquire_remote_apic(apicid) {}
static inline void inquire_remote_apic(int apicid)
{
}
#endif /* __ASM_NUMAQ_WAKECPU_H */
...@@ -19,6 +19,8 @@ struct pci_sysdata {
};
extern int pci_routeirq;
extern int noioapicquirk;
extern int noioapicreroute;
/* scan a bus after allocating a pci_sysdata for it */
extern struct pci_bus *pci_scan_bus_on_node(int busno, struct pci_ops *ops,
......
...@@ -56,23 +56,55 @@ static inline pte_t native_ptep_get_and_clear(pte_t *xp)
#define pte_none(x)		(!(x).pte_low)
/*
 * Bits 0, 6 and 7 are taken, split up the 29 bits of offset
 * into this range:
 * Bits _PAGE_BIT_PRESENT, _PAGE_BIT_FILE and _PAGE_BIT_PROTNONE are taken,
 * split up the 29 bits of offset into this range:
 */
#define PTE_FILE_MAX_BITS	29
#define PTE_FILE_SHIFT1 (_PAGE_BIT_PRESENT + 1)
#if _PAGE_BIT_FILE < _PAGE_BIT_PROTNONE
#define PTE_FILE_SHIFT2 (_PAGE_BIT_FILE + 1)
#define PTE_FILE_SHIFT3 (_PAGE_BIT_PROTNONE + 1)
#else
#define PTE_FILE_SHIFT2 (_PAGE_BIT_PROTNONE + 1)
#define PTE_FILE_SHIFT3 (_PAGE_BIT_FILE + 1)
#endif
#define PTE_FILE_BITS1 (PTE_FILE_SHIFT2 - PTE_FILE_SHIFT1 - 1)
#define PTE_FILE_BITS2 (PTE_FILE_SHIFT3 - PTE_FILE_SHIFT2 - 1)
#define pte_to_pgoff(pte)						\
	((((pte).pte_low >> 1) & 0x1f) + (((pte).pte_low >> 8) << 5))
	((((pte).pte_low >> PTE_FILE_SHIFT1)				\
& ((1U << PTE_FILE_BITS1) - 1)) \
+ ((((pte).pte_low >> PTE_FILE_SHIFT2) \
& ((1U << PTE_FILE_BITS2) - 1)) << PTE_FILE_BITS1) \
+ (((pte).pte_low >> PTE_FILE_SHIFT3) \
<< (PTE_FILE_BITS1 + PTE_FILE_BITS2)))
#define pgoff_to_pte(off)						\
	((pte_t) { .pte_low = (((off) & 0x1f) << 1) +			\
			(((off) >> 5) << 8) + _PAGE_FILE })
	((pte_t) { .pte_low =						\
	   (((off) & ((1U << PTE_FILE_BITS1) - 1)) << PTE_FILE_SHIFT1)	\
+ ((((off) >> PTE_FILE_BITS1) & ((1U << PTE_FILE_BITS2) - 1)) \
<< PTE_FILE_SHIFT2) \
+ (((off) >> (PTE_FILE_BITS1 + PTE_FILE_BITS2)) \
<< PTE_FILE_SHIFT3) \
+ _PAGE_FILE })
/* Encode and de-code a swap entry */
#define __swp_type(x)			(((x).val >> 1) & 0x1f)
#define __swp_offset(x)			((x).val >> 8)
#define __swp_entry(type, offset)				\
	((swp_entry_t) { ((type) << 1) | ((offset) << 8) })
#if _PAGE_BIT_FILE < _PAGE_BIT_PROTNONE
#define SWP_TYPE_BITS (_PAGE_BIT_FILE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_PROTNONE + 1)
#else
#define SWP_TYPE_BITS (_PAGE_BIT_PROTNONE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_FILE + 1)
#endif
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)
#define __swp_type(x) (((x).val >> (_PAGE_BIT_PRESENT + 1)) \
& ((1U << SWP_TYPE_BITS) - 1))
#define __swp_offset(x) ((x).val >> SWP_OFFSET_SHIFT)
#define __swp_entry(type, offset) ((swp_entry_t) { \
((type) << (_PAGE_BIT_PRESENT + 1)) \
| ((offset) << SWP_OFFSET_SHIFT) })
#define __pte_to_swp_entry(pte)		((swp_entry_t) { (pte).pte_low })
#define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
......
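The new PTE_FILE_SHIFT*/PTE_FILE_BITS* packing can be checked in isolation. Assuming the bit assignments introduced later in this diff (_PAGE_BIT_PRESENT=0, _PAGE_BIT_FILE=6, _PAGE_BIT_PROTNONE=8, hence SHIFT1=1, SHIFT2=7, SHIFT3=9, BITS1=5, BITS2=1), the standalone round trip below exercises the same arithmetic as pte_to_pgoff()/pgoff_to_pte():

#include <assert.h>
#include <stdint.h>

#define PTE_FILE_SHIFT1	1
#define PTE_FILE_SHIFT2	7
#define PTE_FILE_SHIFT3	9
#define PTE_FILE_BITS1	(PTE_FILE_SHIFT2 - PTE_FILE_SHIFT1 - 1)
#define PTE_FILE_BITS2	(PTE_FILE_SHIFT3 - PTE_FILE_SHIFT2 - 1)
#define _PAGE_FILE	(1u << 6)	/* assumed _PAGE_BIT_FILE = 6 */

/* Scatter a 29-bit file offset around the present/file/protnone bits. */
static uint32_t pgoff_to_pte_low(uint32_t off)
{
	return ((off & ((1u << PTE_FILE_BITS1) - 1)) << PTE_FILE_SHIFT1)
	     + (((off >> PTE_FILE_BITS1) & ((1u << PTE_FILE_BITS2) - 1))
		<< PTE_FILE_SHIFT2)
	     + ((off >> (PTE_FILE_BITS1 + PTE_FILE_BITS2)) << PTE_FILE_SHIFT3)
	     + _PAGE_FILE;
}

/* Gather the three fields back into a contiguous offset. */
static uint32_t pte_low_to_pgoff(uint32_t pte_low)
{
	return ((pte_low >> PTE_FILE_SHIFT1) & ((1u << PTE_FILE_BITS1) - 1))
	     + (((pte_low >> PTE_FILE_SHIFT2) & ((1u << PTE_FILE_BITS2) - 1))
		<< PTE_FILE_BITS1)
	     + ((pte_low >> PTE_FILE_SHIFT3)
		<< (PTE_FILE_BITS1 + PTE_FILE_BITS2));
}

int main(void)
{
	uint32_t off;

	for (off = 0; off < (1u << 20); off += 4097)
		assert(pte_low_to_pgoff(pgoff_to_pte_low(off)) == off);
	return 0;
}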
...@@ -166,6 +166,7 @@ static inline int pte_none(pte_t pte)
#define PTE_FILE_MAX_BITS	32
/* Encode and de-code a swap entry */
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > 5)
#define __swp_type(x)			(((x).val) & 0x1f)
#define __swp_offset(x)			((x).val >> 5)
#define __swp_entry(type, offset)	((swp_entry_t){(type) | (offset) << 5})
......
...@@ -10,7 +10,6 @@
#define _PAGE_BIT_PCD		4	/* page cache disabled */
#define _PAGE_BIT_ACCESSED	5	/* was accessed (raised by CPU) */
#define _PAGE_BIT_DIRTY		6	/* was written to (raised by CPU) */
#define _PAGE_BIT_FILE 6
#define _PAGE_BIT_PSE		7	/* 4 MB (or 2MB) page */
#define _PAGE_BIT_PAT		7	/* on 4KB pages */
#define _PAGE_BIT_GLOBAL	8	/* Global TLB entry PPro+ */
...@@ -22,6 +21,12 @@
#define _PAGE_BIT_CPA_TEST	_PAGE_BIT_UNUSED1
#define _PAGE_BIT_NX		63	/* No execute: only valid after cpuid check */
/* If _PAGE_BIT_PRESENT is clear, we use these: */
/* - if the user mapped it with PROT_NONE; pte_present gives true */
#define _PAGE_BIT_PROTNONE _PAGE_BIT_GLOBAL
/* - set: nonlinear file mapping, saved PTE; unset:swap */
#define _PAGE_BIT_FILE _PAGE_BIT_DIRTY
#define _PAGE_PRESENT	(_AT(pteval_t, 1) << _PAGE_BIT_PRESENT)
#define _PAGE_RW	(_AT(pteval_t, 1) << _PAGE_BIT_RW)
#define _PAGE_USER	(_AT(pteval_t, 1) << _PAGE_BIT_USER)
...@@ -46,11 +51,8 @@
#define _PAGE_NX	(_AT(pteval_t, 0))
#endif
/* If _PAGE_PRESENT is clear, we use these: */
#define _PAGE_FILE	_PAGE_DIRTY	/* nonlinear file mapping,
					 * saved PTE; unset:swap */
#define _PAGE_PROTNONE	_PAGE_PSE	/* if the user mapped it with PROT_NONE;
					   pte_present gives true */
#define _PAGE_FILE	(_AT(pteval_t, 1) << _PAGE_BIT_FILE)
#define _PAGE_PROTNONE	(_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
			 _PAGE_ACCESSED | _PAGE_DIRTY)
...@@ -158,8 +160,19 @@
#define PGD_IDENT_ATTR	 0x001		/* PRESENT (no other attributes) */
#endif
/*
* Macro to mark a page protection value as UC-
*/
#define pgprot_noncached(prot) \
((boot_cpu_data.x86 > 3) \
? (__pgprot(pgprot_val(prot) | _PAGE_CACHE_UC_MINUS)) \
: (prot))
#ifndef __ASSEMBLY__
#define pgprot_writecombine pgprot_writecombine
extern pgprot_t pgprot_writecombine(pgprot_t prot);
/*
 * ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas etc..
...@@ -329,6 +342,9 @@ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
#define canon_pgprot(p) __pgprot(pgprot_val(p) & __supported_pte_mask)
#ifndef __ASSEMBLY__
/* Indicate that x86 has its own track and untrack pfn vma functions */
#define __HAVE_PFNMAP_TRACKING
#define __HAVE_PHYS_MEM_ACCESS_PROT
struct file;
pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
......
...@@ -100,15 +100,6 @@ extern unsigned long pg0[];
# include <asm/pgtable-2level.h>
#endif
/*
* Macro to mark a page protection value as "uncacheable".
* On processors which do not support it, this is a no-op.
*/
#define pgprot_noncached(prot) \
((boot_cpu_data.x86 > 3) \
? (__pgprot(pgprot_val(prot) | _PAGE_PCD | _PAGE_PWT)) \
: (prot))
/*
 * Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.
......
...@@ -146,7 +146,7 @@ static inline void native_pgd_clear(pgd_t *pgd)
#define PGDIR_MASK	(~(PGDIR_SIZE - 1))
#define MAXMEM		 _AC(0x00003fffffffffff, UL)
#define MAXMEM		 _AC(__AC(1, UL) << MAX_PHYSMEM_BITS, UL)
#define VMALLOC_START    _AC(0xffffc20000000000, UL)
#define VMALLOC_END      _AC(0xffffe1ffffffffff, UL)
#define VMEMMAP_START	 _AC(0xffffe20000000000, UL)
...@@ -176,12 +176,6 @@ static inline int pmd_bad(pmd_t pmd)
#define pages_to_mb(x)	((x) >> (20 - PAGE_SHIFT))   /* FIXME: is this right? */
/*
* Macro to mark a page protection value as "uncacheable".
*/
#define pgprot_noncached(prot) \
(__pgprot(pgprot_val((prot)) | _PAGE_PCD | _PAGE_PWT))
/*
 * Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.
...@@ -250,10 +244,22 @@ static inline int pud_large(pud_t pte)
extern int direct_gbpages;
/* Encode and de-code a swap entry */
#define __swp_type(x)			(((x).val >> 1) & 0x3f)
#define __swp_offset(x)			((x).val >> 8)
#define __swp_entry(type, offset)	((swp_entry_t) { ((type) << 1) | \
					 ((offset) << 8) })
#if _PAGE_BIT_FILE < _PAGE_BIT_PROTNONE
#define SWP_TYPE_BITS (_PAGE_BIT_FILE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_PROTNONE + 1)
#else
#define SWP_TYPE_BITS (_PAGE_BIT_PROTNONE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_FILE + 1)
#endif
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)
#define __swp_type(x) (((x).val >> (_PAGE_BIT_PRESENT + 1)) \
& ((1U << SWP_TYPE_BITS) - 1))
#define __swp_offset(x) ((x).val >> SWP_OFFSET_SHIFT)
#define __swp_entry(type, offset) ((swp_entry_t) { \
((type) << (_PAGE_BIT_PRESENT + 1)) \
| ((offset) << SWP_OFFSET_SHIFT) })
#define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val((pte)) })
#define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
......
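With the same assumed bit layout (_PAGE_BIT_PRESENT=0, _PAGE_BIT_FILE=6, _PAGE_BIT_PROTNONE=8), SWP_TYPE_BITS evaluates to 5 and SWP_OFFSET_SHIFT to 9, and MAX_SWAPFILES_CHECK() enforces at build time that the type field is wide enough. A worked encode/decode of the generalized swap entry:

#include <assert.h>
#include <stdint.h>

#define SWP_TYPE_BITS		5	/* assumed: FILE - PRESENT - 1 */
#define SWP_OFFSET_SHIFT	9	/* assumed: PROTNONE + 1 */

static uint64_t swp_entry(unsigned type, uint64_t offset)
{
	return ((uint64_t)type << 1) | (offset << SWP_OFFSET_SHIFT);
}

static unsigned swp_type(uint64_t val)
{
	return (val >> 1) & ((1u << SWP_TYPE_BITS) - 1);
}

static uint64_t swp_offset(uint64_t val)
{
	return val >> SWP_OFFSET_SHIFT;
}

int main(void)
{
	uint64_t e = swp_entry(3, 0x12345);

	assert(swp_type(e) == 3 && swp_offset(e) == 0x12345);
	assert((e & 1) == 0);	/* _PAGE_PRESENT must stay clear */
	return 0;
}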
...@@ -110,6 +110,7 @@ struct cpuinfo_x86 {
	/* Index into per_cpu list: */
	u16			cpu_index;
#endif
unsigned int x86_hyper_vendor;
} __attribute__((__aligned__(SMP_CACHE_BYTES)));
#define X86_VENDOR_INTEL	0
...@@ -123,6 +124,9 @@ struct cpuinfo_x86 {
#define X86_VENDOR_UNKNOWN	0xff
#define X86_HYPER_VENDOR_NONE 0
#define X86_HYPER_VENDOR_VMWARE 1
/*
 * capabilities of CPUs
 */
......
#ifndef _ASM_X86_REBOOT_H
#define _ASM_X86_REBOOT_H
#include <linux/kdebug.h>
struct pt_regs;
struct machine_ops {
...@@ -18,4 +20,7 @@ void native_machine_crash_shutdown(struct pt_regs *regs);
void native_machine_shutdown(void);
void machine_real_restart(const unsigned char *code, int length);
typedef void (*nmi_shootdown_cb)(int, struct die_args*);
void nmi_shootdown_cpus(nmi_shootdown_cb callback);
#endif /* _ASM_X86_REBOOT_H */
...@@ -8,6 +8,10 @@
/* Interrupt control for vSMPowered x86_64 systems */
void vsmp_init(void);
void setup_bios_corruption_check(void);
#ifdef CONFIG_X86_VISWS
extern void visws_early_detect(void);
extern int is_visws_box(void);
...@@ -16,6 +20,8 @@ static inline void visws_early_detect(void) { }
static inline int is_visws_box(void) { return 0; }
#endif
extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
extern int wakeup_secondary_cpu_via_init(int apicid, unsigned long start_eip);
/*
 * Any setup quirks to be performed?
 */
...@@ -39,6 +45,7 @@ struct x86_quirks {
	void (*smp_read_mpc_oem)(struct mp_config_oemtable *oemtable,
				 unsigned short oemsize);
	int (*setup_ioapic_ids)(void);
int (*update_genapic)(void);
};
extern struct x86_quirks *x86_quirks;
......
#ifndef _ASM_X86_SIGFRAME_H
#define _ASM_X86_SIGFRAME_H
#include <asm/sigcontext.h>
#include <asm/siginfo.h>
#include <asm/ucontext.h>
#ifdef CONFIG_X86_32
struct sigframe {
	char __user *pretcode;
#define sigframe_ia32		sigframe
#define rt_sigframe_ia32	rt_sigframe
#define sigcontext_ia32 sigcontext
#define _fpstate_ia32 _fpstate
#define ucontext_ia32 ucontext
#else /* !CONFIG_X86_32 */
#ifdef CONFIG_IA32_EMULATION
#include <asm/ia32.h>
#endif /* CONFIG_IA32_EMULATION */
#endif /* CONFIG_X86_32 */
#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
struct sigframe_ia32 {
u32 pretcode;
	int sig;
	struct sigcontext sc;
	struct sigcontext_ia32 sc;
	/*
	 * fpstate is unused. fpstate is moved/allocated after
	 * retcode[] below. This movement allows to have the FP state and the
...@@ -11,32 +32,39 @@ struct sigframe {
	 * the offset of extramask[] in the sigframe and thus prevent any
	 * legacy application accessing/modifying it.
	 */
	struct _fpstate fpstate_unused;
	struct _fpstate_ia32 fpstate_unused;
#ifdef CONFIG_IA32_EMULATION
unsigned int extramask[_COMPAT_NSIG_WORDS-1];
#else /* !CONFIG_IA32_EMULATION */
	unsigned long extramask[_NSIG_WORDS-1];
#endif /* CONFIG_IA32_EMULATION */
	char retcode[8];
	/* fp state follows here */
};
struct rt_sigframe {
	char __user *pretcode;
struct rt_sigframe_ia32 {
	u32 pretcode;
	int sig;
	struct siginfo __user *pinfo;
	void __user *puc;
	u32 pinfo;
	u32 puc;
#ifdef CONFIG_IA32_EMULATION
compat_siginfo_t info;
#else /* !CONFIG_IA32_EMULATION */
	struct siginfo info;
	struct ucontext uc;
#endif /* CONFIG_IA32_EMULATION */
struct ucontext_ia32 uc;
	char retcode[8];
	/* fp state follows here */
};
#else
#endif /* defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION) */
#ifdef CONFIG_X86_64
struct rt_sigframe {
	char __user *pretcode;
	struct ucontext uc;
	struct siginfo info;
	/* fp state follows here */
};
#endif /* CONFIG_X86_64 */
int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
			sigset_t *set, struct pt_regs *regs);
int ia32_setup_frame(int sig, struct k_sigaction *ka,
		     sigset_t *set, struct pt_regs *regs);
#endif
#endif /* _ASM_X86_SIGFRAME_H */
...@@ -27,7 +27,7 @@
#else /* CONFIG_X86_32 */
# define SECTION_SIZE_BITS	27 /* matt - 128 is convenient right now */
# define MAX_PHYSADDR_BITS	44
# define MAX_PHYSMEM_BITS	44
# define MAX_PHYSMEM_BITS	44 /* Can be max 45 bits */
#endif
#endif /* CONFIG_SPARSEMEM */
......
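Deriving MAXMEM from MAX_PHYSMEM_BITS ties the two limits together: with MAX_PHYSMEM_BITS = 44 the new expression evaluates to 2^44 bytes (16 TiB), replacing the old hard-coded 0x00003fffffffffff. A one-liner to check the arithmetic (assumes a 64-bit unsigned long, as on x86-64):

#include <stdio.h>

int main(void)
{
	unsigned long maxmem = 1UL << 44;	/* MAX_PHYSMEM_BITS = 44 */

	/* prints MAXMEM = 0x100000000000 = 16 TiB */
	printf("MAXMEM = %#lx = %lu TiB\n", maxmem, maxmem >> 40);
	return 0;
}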
...@@ -40,7 +40,7 @@ asmlinkage int sys_sigaction(int, const struct old_sigaction __user *,
			     struct old_sigaction __user *);
asmlinkage int sys_sigaltstack(unsigned long);
asmlinkage unsigned long sys_sigreturn(unsigned long);
asmlinkage int sys_rt_sigreturn(unsigned long);
asmlinkage int sys_rt_sigreturn(struct pt_regs);
/* kernel/ioport.c */
asmlinkage long sys_iopl(unsigned long);
......
...@@ -314,6 +314,8 @@ extern void free_init_pages(char *what, unsigned long begin, unsigned long end);
void default_idle(void);
void stop_this_cpu(void *dummy);
/*
 * Force strict CPU ordering.
 * And yes, this is required on UP too when we're talking
......
...@@ -3,6 +3,7 @@
#ifndef __ASSEMBLY__
#ifdef CONFIG_X86_TRAMPOLINE
/*
 * Trampoline 80x86 program as an array.
 */
...@@ -13,8 +14,14 @@ extern unsigned char *trampoline_base;
extern unsigned long init_rsp;
extern unsigned long initial_code;
#define TRAMPOLINE_SIZE roundup(trampoline_end - trampoline_data, PAGE_SIZE)
#define TRAMPOLINE_BASE		0x6000
extern unsigned long setup_trampoline(void);
extern void __init reserve_trampoline_memory(void);
#else
static inline void reserve_trampoline_memory(void) {};
#endif /* CONFIG_X86_TRAMPOLINE */
#endif /* __ASSEMBLY__ */
......
...@@ -350,14 +350,14 @@ do {									\
#define __put_user_nocheck(x, ptr, size)			\
({								\
	long __pu_err;						\
	int __pu_err;						\
	__put_user_size((x), (ptr), (size), __pu_err, -EFAULT);	\
	__pu_err;						\
})
#define __get_user_nocheck(x, ptr, size)				\
({									\
	long __gu_err;							\
	int __gu_err;							\
	unsigned long __gu_val;						\
	__get_user_size(__gu_val, (ptr), (size), __gu_err, -EFAULT);	\
	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
......
...@@ -32,13 +32,18 @@
enum uv_bios_cmd {
	UV_BIOS_COMMON,
	UV_BIOS_GET_SN_INFO,
	UV_BIOS_FREQ_BASE
	UV_BIOS_FREQ_BASE,
UV_BIOS_WATCHLIST_ALLOC,
UV_BIOS_WATCHLIST_FREE,
UV_BIOS_MEMPROTECT,
UV_BIOS_GET_PARTITION_ADDR
};
/*
 * Status values returned from a BIOS call.
 */
enum {
BIOS_STATUS_MORE_PASSES = 1,
	BIOS_STATUS_SUCCESS		=  0,
	BIOS_STATUS_UNIMPLEMENTED	= -ENOSYS,
	BIOS_STATUS_EINVAL		= -EINVAL,
...@@ -71,6 +76,21 @@ union partition_info_u {
	};
};
union uv_watchlist_u {
u64 val;
struct {
u64 blade : 16,
size : 32,
filler : 16;
};
};
enum uv_memprotect {
UV_MEMPROT_RESTRICT_ACCESS,
UV_MEMPROT_ALLOW_AMO,
UV_MEMPROT_ALLOW_RW
};
/*
 * bios calls have 6 parameters
 */
...@@ -80,14 +100,20 @@ extern s64 uv_bios_call_reentrant(enum uv_bios_cmd, u64, u64, u64, u64, u64);
extern s64 uv_bios_get_sn_info(int, int *, long *, long *, long *);
extern s64 uv_bios_freq_base(u64, u64 *);
extern int uv_bios_mq_watchlist_alloc(int, unsigned long, unsigned int,
unsigned long *);
extern int uv_bios_mq_watchlist_free(int, int);
extern s64 uv_bios_change_memprotect(u64, u64, enum uv_memprotect);
extern s64 uv_bios_reserved_page_pa(u64, u64 *, u64 *, u64 *);
extern void uv_bios_init(void);
extern unsigned long sn_rtc_cycles_per_second;
extern int uv_type;
extern long sn_partition_id;
extern long uv_coherency_id;
extern long uv_region_size;
#define partition_coherence_id() (uv_coherency_id)
extern long sn_coherency_id;
extern long sn_region_size;
#define partition_coherence_id() (sn_coherency_id)
extern struct kobject *sgi_uv_kobj;	/* /sys/firmware/sgi_uv */
......
...@@ -113,25 +113,37 @@
 */
#define UV_MAX_NASID_VALUE	(UV_MAX_NUMALINK_NODES * 2)
struct uv_scir_s {
struct timer_list timer;
unsigned long offset;
unsigned long last;
unsigned long idle_on;
unsigned long idle_off;
unsigned char state;
unsigned char enabled;
};
/*
 * The following defines attributes of the HUB chip. These attributes are
 * frequently referenced and are kept in the per-cpu data areas of each cpu.
 * They are kept together in a struct to minimize cache misses.
 */
struct uv_hub_info_s {
	unsigned long	global_mmr_base;
	unsigned long	gpa_mask;
	unsigned long	gnode_upper;
	unsigned long	lowmem_remap_top;
	unsigned long	lowmem_remap_base;
	unsigned short	pnode;
	unsigned short	pnode_mask;
	unsigned short	coherency_domain_number;
	unsigned short	numa_blade_id;
	unsigned char	blade_processor_id;
	unsigned char	m_val;
	unsigned char	n_val;
struct uv_scir_s scir;
};
DECLARE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);
#define uv_hub_info		(&__get_cpu_var(__uv_hub_info))
#define uv_cpu_hub_info(cpu)	(&per_cpu(__uv_hub_info, cpu))
...@@ -163,6 +175,30 @@ DECLARE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);
#define UV_APIC_PNODE_SHIFT	6
/* Local Bus from cpu's perspective */
#define LOCAL_BUS_BASE 0x1c00000
#define LOCAL_BUS_SIZE (4 * 1024 * 1024)
/*
* System Controller Interface Reg
*
* Note there are NO leds on a UV system. This register is only
* used by the system controller to monitor system-wide operation.
 * There are 64 regs per node. With Nehalem cpus (2 cores per node,
* 8 cpus per core, 2 threads per cpu) there are 32 cpu threads on
* a node.
*
* The window is located at top of ACPI MMR space
*/
#define SCIR_WINDOW_COUNT 64
#define SCIR_LOCAL_MMR_BASE (LOCAL_BUS_BASE + \
LOCAL_BUS_SIZE - \
SCIR_WINDOW_COUNT)
#define SCIR_CPU_HEARTBEAT 0x01 /* timer interrupt */
#define SCIR_CPU_ACTIVITY 0x02 /* not idle */
#define SCIR_CPU_HB_INTERVAL (HZ) /* once per second */
/*
 * Macros for converting between kernel virtual addresses, socket local physical
 * addresses, and UV global physical addresses.
...@@ -174,7 +210,7 @@ DECLARE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);
static inline unsigned long uv_soc_phys_ram_to_gpa(unsigned long paddr)
{
	if (paddr < uv_hub_info->lowmem_remap_top)
		paddr += uv_hub_info->lowmem_remap_base;
		paddr |= uv_hub_info->lowmem_remap_base;
	return paddr | uv_hub_info->gnode_upper;
}
...@@ -182,19 +218,7 @@ static inline unsigned long uv_soc_phys_ram_to_gpa(unsigned long paddr)
/* socket virtual --> UV global physical address */
static inline unsigned long uv_gpa(void *v)
{
	return __pa(v) | uv_hub_info->gnode_upper;
	return uv_soc_phys_ram_to_gpa(__pa(v));
}
/* socket virtual --> UV global physical address */
static inline void *uv_vgpa(void *v)
{
return (void *)uv_gpa(v);
}
/* UV global physical address --> socket virtual */
static inline void *uv_va(unsigned long gpa)
{
return __va(gpa & uv_hub_info->gpa_mask);
}
/* pnode, offset --> socket virtual */
...@@ -277,6 +301,16 @@ static inline void uv_write_local_mmr(unsigned long offset, unsigned long val)
	*uv_local_mmr_address(offset) = val;
}
static inline unsigned char uv_read_local_mmr8(unsigned long offset)
{
return *((unsigned char *)uv_local_mmr_address(offset));
}
static inline void uv_write_local_mmr8(unsigned long offset, unsigned char val)
{
*((unsigned char *)uv_local_mmr_address(offset)) = val;
}
/*
 * Structures and definitions for converting between cpu, node, pnode, and blade
 * numbers.
...@@ -351,5 +385,20 @@ static inline int uv_num_possible_blades(void)
	return uv_possible_blades;
}
#endif /* _ASM_X86_UV_UV_HUB_H */
/* Update SCIR state */
static inline void uv_set_scir_bits(unsigned char value)
{
if (uv_hub_info->scir.state != value) {
uv_hub_info->scir.state = value;
uv_write_local_mmr8(uv_hub_info->scir.offset, value);
}
}
static inline void uv_set_cpu_scir_bits(int cpu, unsigned char value)
{
if (uv_cpu_hub_info(cpu)->scir.state != value) {
uv_cpu_hub_info(cpu)->scir.state = value;
uv_write_local_mmr8(uv_cpu_hub_info(cpu)->scir.offset, value);
}
}
#endif /* _ASM_X86_UV_UV_HUB_H */
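uv_set_scir_bits() above caches the last value written so the byte-wide MMR is only touched when the heartbeat state actually changes. A reduced model of that write-avoidance idea (stand-in types and a printf in place of the MMR write; not the kernel's plumbing):

#include <stdio.h>

struct scir_model { unsigned char state; unsigned long offset; };

static void mmr_write8(unsigned long offset, unsigned char v)
{
	printf("MMR[%#lx] <- %#x\n", offset, v);	/* the "slow" write */
}

static void set_scir_bits(struct scir_model *s, unsigned char value)
{
	if (s->state != value) {	/* skip redundant MMR traffic */
		s->state = value;
		mmr_write8(s->offset, value);
	}
}

int main(void)
{
	struct scir_model s = { 0, 0x1c00000 };

	set_scir_bits(&s, 0x01);	/* writes */
	set_scir_bits(&s, 0x01);	/* skipped: value unchanged */
	return 0;
}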
/*
* Copyright (C) 2008, VMware, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
*/
#ifndef ASM_X86__VMWARE_H
#define ASM_X86__VMWARE_H
extern unsigned long vmware_get_tsc_khz(void);
extern int vmware_platform(void);
extern void vmware_set_feature_bits(struct cpuinfo_x86 *c);
#endif
...@@ -33,8 +33,14 @@
#ifndef _ASM_X86_XEN_HYPERCALL_H
#define _ASM_X86_XEN_HYPERCALL_H
#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <xen/interface/xen.h>
#include <xen/interface/sched.h>
......
...@@ -33,39 +33,10 @@
#ifndef _ASM_X86_XEN_HYPERVISOR_H
#define _ASM_X86_XEN_HYPERVISOR_H
#include <linux/types.h>
#include <linux/kernel.h>
#include <xen/interface/xen.h>
#include <xen/interface/version.h>
#include <asm/ptrace.h>
#include <asm/page.h>
#include <asm/desc.h>
#if defined(__i386__)
# ifdef CONFIG_X86_PAE
# include <asm-generic/pgtable-nopud.h>
# else
# include <asm-generic/pgtable-nopmd.h>
# endif
#endif
#include <asm/xen/hypercall.h>
/* arch/i386/kernel/setup.c */
extern struct shared_info *HYPERVISOR_shared_info;
extern struct start_info *xen_start_info;
/* arch/i386/mach-xen/evtchn.c */
/* Force a proper event-channel callback from Xen. */
extern void force_evtchn_callback(void);
/* Turn jiffies into Xen system time. */
u64 jiffies_to_st(unsigned long jiffies);
#define MULTI_UVMFLAGS_INDEX 3
#define MULTI_UVMDOMID_INDEX 4
enum xen_domain_type {
	XEN_NATIVE,
	XEN_PV_DOMAIN,
...@@ -74,9 +45,15 @@ enum xen_domain_type {
extern enum xen_domain_type xen_domain_type;
#ifdef CONFIG_XEN
#define xen_domain()		(xen_domain_type != XEN_NATIVE)
#define xen_pv_domain()		(xen_domain_type == XEN_PV_DOMAIN)
#else
#define xen_domain() (0)
#endif
#define xen_pv_domain() (xen_domain() && xen_domain_type == XEN_PV_DOMAIN)
#define xen_hvm_domain() (xen_domain() && xen_domain_type == XEN_HVM_DOMAIN)
#define xen_initial_domain()	(xen_pv_domain() && xen_start_info->flags & SIF_INITDOMAIN)
#define xen_hvm_domain()	(xen_domain_type == XEN_HVM_DOMAIN)
#endif /* _ASM_X86_XEN_HYPERVISOR_H */
 #ifndef _ASM_X86_XEN_PAGE_H
 #define _ASM_X86_XEN_PAGE_H

+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/spinlock.h>
 #include <linux/pfn.h>

 #include <asm/uaccess.h>
+#include <asm/page.h>
 #include <asm/pgtable.h>

+#include <xen/interface/xen.h>
 #include <xen/features.h>

 /* Xen machine address */
...
@@ -12,6 +12,7 @@ CFLAGS_REMOVE_tsc.o = -pg
 CFLAGS_REMOVE_rtc.o = -pg
 CFLAGS_REMOVE_paravirt-spinlocks.o = -pg
 CFLAGS_REMOVE_ftrace.o = -pg
+CFLAGS_REMOVE_early_printk.o = -pg
 endif

 #
@@ -23,9 +24,9 @@ CFLAGS_vsyscall_64.o := $(PROFILING) -g0 $(nostackp)
 CFLAGS_hpet.o := $(nostackp)
 CFLAGS_tsc.o := $(nostackp)

-obj-y := process_$(BITS).o signal_$(BITS).o entry_$(BITS).o
+obj-y := process_$(BITS).o signal.o entry_$(BITS).o
 obj-y += traps.o irq.o irq_$(BITS).o dumpstack_$(BITS).o
-obj-y += time_$(BITS).o ioport.o ldt.o
+obj-y += time_$(BITS).o ioport.o ldt.o dumpstack.o
 obj-y += setup.o i8259.o irqinit_$(BITS).o setup_percpu.o
 obj-$(CONFIG_X86_VISWS) += visws_quirks.o
 obj-$(CONFIG_X86_32) += probe_roms_32.o
@@ -105,6 +106,8 @@ microcode-$(CONFIG_MICROCODE_INTEL) += microcode_intel.o
 microcode-$(CONFIG_MICROCODE_AMD) += microcode_amd.o
 obj-$(CONFIG_MICROCODE) += microcode.o

+obj-$(CONFIG_X86_CHECK_BIOS_CORRUPTION) += check.o
+
 ###
 # 64 bit specific files
 ifeq ($(CONFIG_X86_64),y)
...
@@ -1360,6 +1360,17 @@ static void __init acpi_process_madt(void)
 			disable_acpi();
 		}
 	}
+
+	/*
+	 * ACPI supports both logical (e.g. Hyper-Threading) and physical
+	 * processors, where MPS only supports physical.
+	 */
+	if (acpi_lapic && acpi_ioapic)
+		printk(KERN_INFO "Using ACPI (MADT) for SMP configuration "
+		       "information\n");
+	else if (acpi_lapic)
+		printk(KERN_INFO "Using ACPI for processor (LAPIC) "
+		       "configuration information\n");
 #endif
 	return;
 }
...
@@ -24,6 +24,7 @@
 #include <linux/iommu-helper.h>
 #include <asm/proto.h>
 #include <asm/iommu.h>
+#include <asm/gart.h>
 #include <asm/amd_iommu_types.h>
 #include <asm/amd_iommu.h>
@@ -235,8 +236,9 @@ static int iommu_completion_wait(struct amd_iommu *iommu)
 	status &= ~MMIO_STATUS_COM_WAIT_INT_MASK;
 	writel(status, iommu->mmio_base + MMIO_STATUS_OFFSET);

-	if (unlikely((i == EXIT_LOOP_COUNT) && printk_ratelimit()))
-		printk(KERN_WARNING "AMD IOMMU: Completion wait loop failed\n");
+	if (unlikely(i == EXIT_LOOP_COUNT))
+		panic("AMD IOMMU: Completion wait loop failed\n");
+
 out:
 	spin_unlock_irqrestore(&iommu->lock, flags);
...
@@ -28,6 +28,7 @@
 #include <asm/amd_iommu_types.h>
 #include <asm/amd_iommu.h>
 #include <asm/iommu.h>
+#include <asm/gart.h>

 /*
  * definitions for the ACPI scanning code
@@ -427,6 +428,10 @@ static u8 * __init alloc_command_buffer(struct amd_iommu *iommu)
 	memcpy_toio(iommu->mmio_base + MMIO_CMD_BUF_OFFSET,
 		    &entry, sizeof(entry));

+	/* set head and tail to zero manually */
+	writel(0x00, iommu->mmio_base + MMIO_CMD_HEAD_OFFSET);
+	writel(0x00, iommu->mmio_base + MMIO_CMD_TAIL_OFFSET);
+
 	iommu_feature_enable(iommu, CONTROL_CMDBUF_EN);

 	return cmd_buf;
@@ -1074,7 +1079,8 @@ int __init amd_iommu_init(void)
 		goto free;

 	/* IOMMU rlookup table - find the IOMMU for a specific device */
-	amd_iommu_rlookup_table = (void *)__get_free_pages(GFP_KERNEL,
+	amd_iommu_rlookup_table = (void *)__get_free_pages(
+			GFP_KERNEL | __GFP_ZERO,
 			get_order(rlookup_table_size));
 	if (amd_iommu_rlookup_table == NULL)
 		goto free;
...
 /*
  * Firmware replacement code.
  *
- * Work around broken BIOSes that don't set an aperture or only set the
- * aperture in the AGP bridge.
+ * Work around broken BIOSes that don't set an aperture, only set the
+ * aperture in the AGP bridge, or set too small aperture.
+ *
  * If all fails map the aperture over some low memory. This is cheaper than
  * doing bounce buffering. The memory is lost. This is done at early boot
  * because only the bootmem allocator can allocate 32+MB.
...
This diff is collapsed.
@@ -391,11 +391,7 @@ static int power_off;
 #else
 static int power_off = 1;
 #endif
-#ifdef CONFIG_APM_REAL_MODE_POWER_OFF
-static int realmode_power_off = 1;
-#else
 static int realmode_power_off;
-#endif
 #ifdef CONFIG_APM_ALLOW_INTS
 static int allow_ints = 1;
 #else
...
@@ -11,7 +11,7 @@
 #include <linux/suspend.h>
 #include <linux/kbuild.h>
 #include <asm/ucontext.h>
-#include "sigframe.h"
+#include <asm/sigframe.h>
 #include <asm/pgtable.h>
 #include <asm/fixmap.h>
 #include <asm/processor.h>
...
@@ -20,6 +20,8 @@
 #include <xen/interface/xen.h>

+#include <asm/sigframe.h>
+
 #define __NO_STUBS 1
 #undef __SYSCALL
 #undef _ASM_X86_UNISTD_64_H
@@ -87,7 +89,7 @@ int main(void)
 	BLANK();
 #undef ENTRY
 	DEFINE(IA32_RT_SIGFRAME_sigcontext,
-	       offsetof (struct rt_sigframe32, uc.uc_mcontext));
+	       offsetof (struct rt_sigframe_ia32, uc.uc_mcontext));
 	BLANK();
 #endif
 	DEFINE(pbe_address, offsetof(struct pbe, address));
...
@@ -69,10 +69,10 @@ s64 uv_bios_call_reentrant(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3,

 long sn_partition_id;
 EXPORT_SYMBOL_GPL(sn_partition_id);
-long uv_coherency_id;
-EXPORT_SYMBOL_GPL(uv_coherency_id);
-long uv_region_size;
-EXPORT_SYMBOL_GPL(uv_region_size);
+long sn_coherency_id;
+EXPORT_SYMBOL_GPL(sn_coherency_id);
+long sn_region_size;
+EXPORT_SYMBOL_GPL(sn_region_size);
 int uv_type;
@@ -100,6 +100,56 @@ s64 uv_bios_get_sn_info(int fc, int *uvtype, long *partid, long *coher,
 	return ret;
 }

+int
+uv_bios_mq_watchlist_alloc(int blade, unsigned long addr, unsigned int mq_size,
+			   unsigned long *intr_mmr_offset)
+{
+	union uv_watchlist_u size_blade;
+	u64 watchlist;
+	s64 ret;
+
+	size_blade.size = mq_size;
+	size_blade.blade = blade;
+
+	/*
+	 * bios returns watchlist number or negative error number.
+	 */
+	ret = (int)uv_bios_call_irqsave(UV_BIOS_WATCHLIST_ALLOC, addr,
+			size_blade.val, (u64)intr_mmr_offset,
+			(u64)&watchlist, 0);
+	if (ret < BIOS_STATUS_SUCCESS)
+		return ret;
+
+	return watchlist;
+}
+EXPORT_SYMBOL_GPL(uv_bios_mq_watchlist_alloc);
+
+int
+uv_bios_mq_watchlist_free(int blade, int watchlist_num)
+{
+	return (int)uv_bios_call_irqsave(UV_BIOS_WATCHLIST_FREE,
+				blade, watchlist_num, 0, 0, 0);
+}
+EXPORT_SYMBOL_GPL(uv_bios_mq_watchlist_free);
+
+s64
+uv_bios_change_memprotect(u64 paddr, u64 len, enum uv_memprotect perms)
+{
+	return uv_bios_call_irqsave(UV_BIOS_MEMPROTECT, paddr, len,
+					perms, 0, 0);
+}
+EXPORT_SYMBOL_GPL(uv_bios_change_memprotect);
+
+s64
+uv_bios_reserved_page_pa(u64 buf, u64 *cookie, u64 *addr, u64 *len)
+{
+	s64 ret;
+
+	ret = uv_bios_call_irqsave(UV_BIOS_GET_PARTITION_ADDR, (u64)cookie,
+					(u64)addr, buf, (u64)len, 0);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(uv_bios_reserved_page_pa);
+
 s64 uv_bios_freq_base(u64 clock_type, u64 *ticks_per_second)
 {
...
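
For illustration, a caller of the new watchlist API would look roughly like this (blade, mq_paddr, and mq_size are assumed to be supplied by the caller; the sketch is not part of the commit):

	unsigned long intr_mmr_offset;
	int watchlist;

	/* The BIOS hands back a watchlist number, or a negative error code. */
	watchlist = uv_bios_mq_watchlist_alloc(blade, mq_paddr, mq_size,
					       &intr_mmr_offset);
	if (watchlist < 0)
		return watchlist;

	/* ... use the message queue ... */

	uv_bios_mq_watchlist_free(blade, watchlist);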
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/kthread.h>
#include <linux/workqueue.h>
#include <asm/e820.h>
#include <asm/proto.h>
/*
 * Some BIOSes seem to corrupt the low 64k of memory during events
 * like suspend/resume and unplugging an HDMI cable.  Reserve all
 * remaining free memory in that area and fill it with a distinct
 * pattern.
 */
#define MAX_SCAN_AREAS 8

static int __read_mostly memory_corruption_check = -1;

static unsigned __read_mostly corruption_check_size = 64*1024;
static unsigned __read_mostly corruption_check_period = 60; /* seconds */

static struct e820entry scan_areas[MAX_SCAN_AREAS];
static int num_scan_areas;

static __init int set_corruption_check(char *arg)
{
	char *end;

	memory_corruption_check = simple_strtol(arg, &end, 10);

	return (*end == 0) ? 0 : -EINVAL;
}
early_param("memory_corruption_check", set_corruption_check);

static __init int set_corruption_check_period(char *arg)
{
	char *end;

	corruption_check_period = simple_strtoul(arg, &end, 10);

	return (*end == 0) ? 0 : -EINVAL;
}
early_param("memory_corruption_check_period", set_corruption_check_period);

static __init int set_corruption_check_size(char *arg)
{
	char *end;
	unsigned size;

	size = memparse(arg, &end);

	if (*end == '\0')
		corruption_check_size = size;

	return (size == corruption_check_size) ? 0 : -EINVAL;
}
early_param("memory_corruption_check_size", set_corruption_check_size);
void __init setup_bios_corruption_check(void)
{
	u64 addr = PAGE_SIZE;	/* assume first page is reserved anyway */

	if (memory_corruption_check == -1) {
		memory_corruption_check =
#ifdef CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK
			1
#else
			0
#endif
			;
	}

	if (corruption_check_size == 0)
		memory_corruption_check = 0;

	if (!memory_corruption_check)
		return;

	corruption_check_size = round_up(corruption_check_size, PAGE_SIZE);

	while (addr < corruption_check_size && num_scan_areas < MAX_SCAN_AREAS) {
		u64 size;
		addr = find_e820_area_size(addr, &size, PAGE_SIZE);

		if (addr == 0)
			break;

		if ((addr + size) > corruption_check_size)
			size = corruption_check_size - addr;

		if (size == 0)
			break;

		e820_update_range(addr, size, E820_RAM, E820_RESERVED);
		scan_areas[num_scan_areas].addr = addr;
		scan_areas[num_scan_areas].size = size;
		num_scan_areas++;

		/* Assume we've already mapped this early memory */
		memset(__va(addr), 0, size);

		addr += size;
	}

	printk(KERN_INFO "Scanning %d areas for low memory corruption\n",
	       num_scan_areas);
	update_e820();
}

void check_for_bios_corruption(void)
{
	int i;
	int corruption = 0;

	if (!memory_corruption_check)
		return;

	for (i = 0; i < num_scan_areas; i++) {
		unsigned long *addr = __va(scan_areas[i].addr);
		unsigned long size = scan_areas[i].size;

		for (; size; addr++, size -= sizeof(unsigned long)) {
			if (!*addr)
				continue;
			printk(KERN_ERR "Corrupted low memory at %p (%lx phys) = %08lx\n",
			       addr, __pa(addr), *addr);
			corruption = 1;
			*addr = 0;
		}
	}

	WARN_ONCE(corruption, KERN_ERR "Memory corruption detected in low memory\n");
}

static void check_corruption(struct work_struct *dummy);
static DECLARE_DELAYED_WORK(bios_check_work, check_corruption);

static void check_corruption(struct work_struct *dummy)
{
	check_for_bios_corruption();
	schedule_delayed_work(&bios_check_work,
		round_jiffies_relative(corruption_check_period*HZ));
}

static int start_periodic_check_for_corruption(void)
{
	if (!memory_corruption_check || corruption_check_period == 0)
		return 0;

	printk(KERN_INFO "Scanning for low memory corruption every %d seconds\n",
	       corruption_check_period);

	/* First time we run the checks right away */
	schedule_delayed_work(&bios_check_work, 0);

	return 0;
}

module_init(start_periodic_check_for_corruption);
@@ -4,6 +4,7 @@
 obj-y := intel_cacheinfo.o addon_cpuid_features.o
 obj-y += proc.o capflags.o powerflags.o common.o
+obj-y += vmware.o hypervisor.o
 obj-$(CONFIG_X86_32) += bugs.o cmpxchg.o
 obj-$(CONFIG_X86_64) += bugs_64.o
...
@@ -120,9 +120,17 @@ void __cpuinit detect_extended_topology(struct cpuinfo_x86 *c)
 	c->cpu_core_id = phys_pkg_id(c->initial_apicid, ht_mask_width)
 						& core_select_mask;
 	c->phys_proc_id = phys_pkg_id(c->initial_apicid, core_plus_mask_width);
+	/*
+	 * Reinit the apicid, now that we have extended initial_apicid.
+	 */
+	c->apicid = phys_pkg_id(c->initial_apicid, 0);
 #else
 	c->cpu_core_id = phys_pkg_id(ht_mask_width) & core_select_mask;
 	c->phys_proc_id = phys_pkg_id(core_plus_mask_width);
+	/*
+	 * Reinit the apicid, now that we have extended initial_apicid.
+	 */
+	c->apicid = phys_pkg_id(0);
 #endif
 	c->x86_max_cores = (core_level_siblings / smp_num_siblings);
...
@@ -283,9 +283,14 @@ static void __cpuinit early_init_amd(struct cpuinfo_x86 *c)
 {
 	early_init_amd_mc(c);

-	/* c->x86_power is 8000_0007 edx. Bit 8 is constant TSC */
-	if (c->x86_power & (1<<8))
+	/*
+	 * c->x86_power is 8000_0007 edx. Bit 8 is TSC runs at constant rate
+	 * with P/T states and does not stop in deep C-states
+	 */
+	if (c->x86_power & (1 << 8)) {
 		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+		set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
+	}

 #ifdef CONFIG_X86_64
 	set_cpu_cap(c, X86_FEATURE_SYSCALL32);
...
@@ -36,6 +36,7 @@
 #include <asm/proto.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
+#include <asm/hypervisor.h>

 #include "cpu.h"
@@ -703,6 +704,7 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
 	detect_ht(c);
 #endif

+	init_hypervisor(c);
 	/*
 	 * On SMP, boot_cpu_data holds the common feature set between
 	 * all CPUs; so make sure that we indicate which features are
...
/*
* Common hypervisor code
*
* Copyright (C) 2008, VMware, Inc.
* Author : Alok N Kataria <akataria@vmware.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
*/
#include <asm/processor.h>
#include <asm/vmware.h>
#include <asm/hypervisor.h>
static inline void __cpuinit
detect_hypervisor_vendor(struct cpuinfo_x86 *c)
{
	if (vmware_platform()) {
		c->x86_hyper_vendor = X86_HYPER_VENDOR_VMWARE;
	} else {
		c->x86_hyper_vendor = X86_HYPER_VENDOR_NONE;
	}
}

unsigned long get_hypervisor_tsc_freq(void)
{
	if (boot_cpu_data.x86_hyper_vendor == X86_HYPER_VENDOR_VMWARE)
		return vmware_get_tsc_khz();
	return 0;
}

static inline void __cpuinit
hypervisor_set_feature_bits(struct cpuinfo_x86 *c)
{
	if (boot_cpu_data.x86_hyper_vendor == X86_HYPER_VENDOR_VMWARE) {
		vmware_set_feature_bits(c);
		return;
	}
}

void __cpuinit init_hypervisor(struct cpuinfo_x86 *c)
{
	detect_hypervisor_vendor(c);
	hypervisor_set_feature_bits(c);
}
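
init_hypervisor() is hooked into identify_cpu() in the common.c hunk above. get_hypervisor_tsc_freq() is meant for the TSC calibration path; a sketch of that call pattern (the surrounding calibration code is assumed here, its diff being collapsed in this view):

	unsigned long tsc_khz;

	/* Prefer a hypervisor-reported TSC frequency over hardware calibration. */
	tsc_khz = get_hypervisor_tsc_freq();
	if (tsc_khz) {
		printk(KERN_INFO
		       "TSC frequency read from hypervisor: %lu kHz\n", tsc_khz);
		return tsc_khz;
	}

	/* ... otherwise fall back to PIT/HPET based calibration ... */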
@@ -41,6 +41,16 @@ static void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
 	if (c->x86 == 15 && c->x86_cache_alignment == 64)
 		c->x86_cache_alignment = 128;
 #endif
+
+	/*
+	 * c->x86_power is 8000_0007 edx. Bit 8 is TSC runs at constant rate
+	 * with P/T states and does not stop in deep C-states
+	 */
+	if (c->x86_power & (1 << 8)) {
+		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+		set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
+	}
 }

 #ifdef CONFIG_X86_32
@@ -242,6 +252,13 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 	intel_workarounds(c);

+	/*
+	 * Detect the extended topology information if available. This
+	 * will reinitialise the initial_apicid which will be used
+	 * in init_intel_cacheinfo()
+	 */
+	detect_extended_topology(c);
+
 	l2 = init_intel_cacheinfo(c);
 	if (c->cpuid_level > 9) {
 		unsigned eax = cpuid_eax(10);
@@ -307,13 +324,11 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
 		set_cpu_cap(c, X86_FEATURE_P4);
 	if (c->x86 == 6)
 		set_cpu_cap(c, X86_FEATURE_P3);
+#endif

 	if (cpu_has_bts)
 		ptrace_bts_init_intel(c);
-#endif

-	detect_extended_topology(c);
 	if (!cpu_has(c, X86_FEATURE_XTOPOLOGY)) {
 		/*
 		 * let's use the legacy cpuid vector 0x1 and 0x4 for topology
...
This diff is collapsed.