Commit 019abbc8 authored by Linus Torvalds


Merge branch 'x86-stage-3-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'x86-stage-3-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (190 commits)
  Revert "cpuacct: reduce one NULL check in fast-path"
  Revert "x86: don't compile vsmp_64 for 32bit"
  x86: Correct behaviour of irq affinity
  x86: early_ioremap_init(), use __fix_to_virt(), because we are sure it's safe
  x86: use default_cpu_mask_to_apicid for 64bit
  x86: fix set_extra_move_desc calling
  x86, PAT, PCI: Change vma prot in pci_mmap to reflect inherited prot
  x86/dmi: fix dmi_alloc() section mismatches
  x86: e820 fix various signedness issues in setup.c and e820.c
  x86: apic/io_apic.c define msi_ir_chip and ir_ioapic_chip all the time
  x86: irq.c keep CONFIG_X86_LOCAL_APIC interrupts together
  x86: irq.c use same path for show_interrupts
  x86: cpu/cpu.h cleanup
  x86: Fix a couple of sparse warnings in arch/x86/kernel/apic/io_apic.c
  Revert "x86: create a non-zero sized bm_pte only when needed"
  x86: pci-nommu.c cleanup
  x86: io_delay.c cleanup
  x86: rtc.c cleanup
  x86: i8253 cleanup
  x86: kdebugfs.c cleanup
  ...
parents 2d25ee36 5a3c8fe7
Mini-HOWTO for using the earlyprintk=dbgp boot option with a
USB2 Debug port key and a debug cable, on x86 systems.
You need two computers, the 'USB debug key' special gadget and
two USB cables, connected like this:
[host/target] <-------> [USB debug key] <-------> [client/console]
1. There are three specific hardware requirements:
a.) Host/target system needs to have USB debug port capability.
You can check this capability by looking at a 'Debug port' bit in
the lspci -vvv output:
# lspci -vvv
...
00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03) (prog-if 20 [EHCI])
Subsystem: Lenovo ThinkPad T61
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin D routed to IRQ 19
Region 0: Memory at fe227000 (32-bit, non-prefetchable) [size=1K]
Capabilities: [50] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 PME-Enable- DSel=0 DScale=0 PME+
Capabilities: [58] Debug port: BAR=1 offset=00a0
^^^^^^^^^^^ <==================== [ HERE ]
Kernel driver in use: ehci_hcd
Kernel modules: ehci-hcd
...
( If your system does not list a debug port capability then you probably
won't be able to use the USB debug key. )
b.) You also need a Netchip USB debug cable/key:
http://www.plxtech.com/products/NET2000/NET20DC/default.asp
This is a small blue plastic connector with two USB connections,
it draws power from its USB connections.
c.) Thirdly, you need a second client/console system with a regular USB port.
2. Software requirements:
a.) On the host/target system:
You need to enable the following kernel config option:
CONFIG_EARLY_PRINTK_DBGP=y
And you need to add the boot command line: "earlyprintk=dbgp".
(If you are using Grub, append it to the 'kernel' line in
/etc/grub.conf)
NOTE: normally earlyprintk console gets turned off once the
regular console is alive - use "earlyprintk=dbgp,keep" to keep
this channel open beyond early bootup. This can be useful for
debugging crashes under Xorg, etc.
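As an illustration, a (hypothetical) Grub entry with the option appended
could look like this - the kernel version, image names and root device
below are placeholders, use whatever your distribution installed:

  title Linux
        root (hd0,0)
        kernel /boot/vmlinuz ro root=/dev/sda1 earlyprintk=dbgp,keep
        initrd /boot/initrd.img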
b.) On the client/console system:
You should enable the following kernel config option:
CONFIG_USB_SERIAL_DEBUG=y
On the next bootup with the modified kernel you should
see one or more /dev/ttyUSBx devices.
Now this channel of kernel messages is ready to be used: start
your favorite terminal emulator (minicom, etc.) and set
it up to use /dev/ttyUSB0 - or use a raw 'cat /dev/ttyUSBx' to
see the raw output.
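For example, a minimal raw-capture session (assuming the debug key
shows up as /dev/ttyUSB0 on the client) could be:

  # stty -F /dev/ttyUSB0 raw
  # cat /dev/ttyUSB0

Any terminal emulator pointed at the same device works just as well.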
c.) On Nvidia Southbridge based systems: the kernel will try to probe
and find out which port has the debug device connected.
3. Testing that it works fine:
You can test the output by using earlyprintk=dbgp,keep and provoking
kernel messages on the host/target system. You can provoke a harmless
kernel message by for example doing:
echo h > /proc/sysrq-trigger
On the host/target system you should see this help line in "dmesg" output:
SysRq : HELP : loglevel(0-9) reBoot Crashdump terminate-all-tasks(E) memory-full-oom-kill(F) kill-all-tasks(I) saK show-backtrace-all-active-cpus(L) show-memory-usage(M) nice-all-RT-tasks(N) powerOff show-registers(P) show-all-timers(Q) unRaw Sync show-task-states(T) Unmount show-blocked-tasks(W) dump-ftrace-buffer(Z)
On the client/console system do:
cat /dev/ttyUSB0
And you should see the help line above displayed shortly after you've
provoked it on the host system.
If it does not work then please ask about it on the linux-kernel@vger.kernel.org
mailing list or contact the x86 maintainers.
@@ -786,6 +786,11 @@ config X86_MCE_AMD
 	   Additional support for AMD specific MCE features such as
 	   the DRAM Error Threshold.

+config X86_MCE_THRESHOLD
+	depends on X86_MCE_AMD || X86_MCE_INTEL
+	bool
+	default y
+
 config X86_MCE_NONFATAL
 	tristate "Check for non-fatal errors on AMD Athlon/Duron / Intel Pentium 4"
 	depends on X86_32 && X86_MCE
@@ -929,6 +934,12 @@ config X86_CPUID
 	  with major 203 and minors 0 to 31 for /dev/cpu/0/cpuid to
 	  /dev/cpu/31/cpuid.

+config X86_CPU_DEBUG
+	tristate "/sys/kernel/debug/x86/cpu/* - CPU Debug support"
+	---help---
+	  If you select this option, this will provide various x86 CPUs
+	  information through debugfs.
+
 choice
 	prompt "High Memory Support"
 	default HIGHMEM4G if !X86_NUMAQ
@@ -1121,7 +1132,7 @@ config NUMA_EMU
 config NODES_SHIFT
 	int "Maximum NUMA Nodes (as a power of 2)" if !MAXSMP
-	range 1 9   if X86_64
+	range 1 9
 	default "9" if MAXSMP
 	default "6" if X86_64
 	default "4" if X86_NUMAQ
@@ -1429,7 +1440,7 @@ config CRASH_DUMP
 config KEXEC_JUMP
 	bool "kexec jump (EXPERIMENTAL)"
 	depends on EXPERIMENTAL
-	depends on KEXEC && HIBERNATION && X86_32
+	depends on KEXEC && HIBERNATION
 	---help---
 	  Jump between original kernel and kexeced kernel and invoke
 	  code in physical address mode via KEXEC
......
@@ -456,24 +456,9 @@ config CPU_SUP_AMD
 	  If unsure, say N.

-config CPU_SUP_CENTAUR_32
+config CPU_SUP_CENTAUR
 	default y
 	bool "Support Centaur processors" if PROCESSOR_SELECT
-	depends on !64BIT
-	---help---
-	  This enables detection, tunings and quirks for Centaur processors
-
-	  You need this enabled if you want your kernel to run on a
-	  Centaur CPU. Disabling this option on other types of CPUs
-	  makes the kernel a tiny bit smaller. Disabling it on a Centaur
-	  CPU might render the kernel unbootable.
-
-	  If unsure, say N.
-
-config CPU_SUP_CENTAUR_64
-	default y
-	bool "Support Centaur processors" if PROCESSOR_SELECT
-	depends on 64BIT
 	---help---
 	  This enables detection, tunings and quirks for Centaur processors
......
@@ -153,34 +153,23 @@ endif

 boot := arch/x86/boot

-PHONY += zImage bzImage compressed zlilo bzlilo \
-         zdisk bzdisk fdimage fdimage144 fdimage288 isoimage install
+BOOT_TARGETS = bzlilo bzdisk fdimage fdimage144 fdimage288 isoimage install
+
+PHONY += bzImage $(BOOT_TARGETS)

 # Default kernel to build
 all: bzImage

 # KBUILD_IMAGE specify target image being built
 KBUILD_IMAGE := $(boot)/bzImage
-zImage zlilo zdisk: KBUILD_IMAGE := $(boot)/zImage

-zImage bzImage: vmlinux
+bzImage: vmlinux
 	$(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
 	$(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
 	$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@

-compressed: zImage
+$(BOOT_TARGETS): vmlinux
+	$(Q)$(MAKE) $(build)=$(boot) $@

-zlilo bzlilo: vmlinux
-	$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zlilo
-
-zdisk bzdisk: vmlinux
-	$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zdisk
-
-fdimage fdimage144 fdimage288 isoimage: vmlinux
-	$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) $@
-
-install:
-	$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) install

 PHONY += vdso_install
 vdso_install:
@@ -205,7 +194,3 @@ define archhelp
   echo  '  FDARGS="..."  arguments for the booted kernel'
   echo  '  FDINITRD=file initrd for the booted kernel'
 endef
-
-CLEAN_FILES += arch/x86/boot/fdimage \
-	       arch/x86/boot/image.iso \
-	       arch/x86/boot/mtools.conf
@@ -6,6 +6,7 @@
 # for more details.
 #
 # Copyright (C) 1994 by Linus Torvalds
+# Changed by many, many contributors over the years.
 #

 # ROOT_DEV specifies the default root-device when making the image.
@@ -21,11 +22,8 @@ ROOT_DEV := CURRENT

 SVGA_MODE := -DSVGA_MODE=NORMAL_VGA

-# If you want the RAM disk device, define this to be the size in blocks.
-
-#RAMDISK := -DRAMDISK=512
-
-targets		:= vmlinux.bin setup.bin setup.elf zImage bzImage
+targets		:= vmlinux.bin setup.bin setup.elf bzImage
+targets		+= fdimage fdimage144 fdimage288 image.iso mtools.conf
 subdir-		:= compressed

 setup-y		+= a20.o cmdline.o copy.o cpu.o cpucheck.o edd.o
@@ -71,17 +69,13 @@ KBUILD_CFLAGS	:= $(LINUXINCLUDE) -g -Os -D_SETUP -D__KERNEL__ \
 KBUILD_CFLAGS	+= $(call cc-option,-m32)
 KBUILD_AFLAGS	:= $(KBUILD_CFLAGS) -D__ASSEMBLY__

-$(obj)/zImage:  asflags-y := $(SVGA_MODE) $(RAMDISK)
-$(obj)/bzImage: ccflags-y := -D__BIG_KERNEL__
-$(obj)/bzImage: asflags-y := $(SVGA_MODE) $(RAMDISK) -D__BIG_KERNEL__
-$(obj)/bzImage: BUILDFLAGS := -b
+$(obj)/bzImage: asflags-y := $(SVGA_MODE)

 quiet_cmd_image = BUILD   $@
-cmd_image = $(obj)/tools/build $(BUILDFLAGS) $(obj)/setup.bin \
-	    $(obj)/vmlinux.bin $(ROOT_DEV) > $@
+cmd_image = $(obj)/tools/build $(obj)/setup.bin $(obj)/vmlinux.bin \
+	    $(ROOT_DEV) > $@

-$(obj)/zImage $(obj)/bzImage: $(obj)/setup.bin \
-			      $(obj)/vmlinux.bin $(obj)/tools/build FORCE
+$(obj)/bzImage: $(obj)/setup.bin $(obj)/vmlinux.bin $(obj)/tools/build FORCE
 	$(call if_changed,image)
 	@echo 'Kernel: $@ is ready' ' (#'`cat .version`')'
@@ -116,9 +110,11 @@ $(obj)/setup.bin: $(obj)/setup.elf FORCE
 $(obj)/compressed/vmlinux: FORCE
 	$(Q)$(MAKE) $(build)=$(obj)/compressed $@

-# Set this if you want to pass append arguments to the zdisk/fdimage/isoimage kernel
+# Set this if you want to pass append arguments to the
+# bzdisk/fdimage/isoimage kernel
 FDARGS =
-# Set this if you want an initrd included with the zdisk/fdimage/isoimage kernel
+# Set this if you want an initrd included with the
+# bzdisk/fdimage/isoimage kernel
 FDINITRD =

 image_cmdline = default linux $(FDARGS) $(if $(FDINITRD),initrd=initrd.img,)
@@ -127,7 +123,7 @@ $(obj)/mtools.conf: $(src)/mtools.conf.in
 	sed -e 's|@OBJ@|$(obj)|g' < $< > $@

 # This requires write access to /dev/fd0
-zdisk: $(BOOTIMAGE) $(obj)/mtools.conf
+bzdisk: $(obj)/bzImage $(obj)/mtools.conf
 	MTOOLSRC=$(obj)/mtools.conf mformat a:	; sync
 	syslinux /dev/fd0			; sync
 	echo '$(image_cmdline)' | \
@@ -135,10 +131,10 @@ bzdisk: $(obj)/bzImage $(obj)/mtools.conf
 	if [ -f '$(FDINITRD)' ] ; then \
 		MTOOLSRC=$(obj)/mtools.conf mcopy '$(FDINITRD)' a:initrd.img ; \
 	fi
-	MTOOLSRC=$(obj)/mtools.conf mcopy $(BOOTIMAGE) a:linux	; sync
+	MTOOLSRC=$(obj)/mtools.conf mcopy $(obj)/bzImage a:linux ; sync

 # These require being root or having syslinux 2.02 or higher installed
-fdimage fdimage144: $(BOOTIMAGE) $(obj)/mtools.conf
+fdimage fdimage144: $(obj)/bzImage $(obj)/mtools.conf
 	dd if=/dev/zero of=$(obj)/fdimage bs=1024 count=1440
 	MTOOLSRC=$(obj)/mtools.conf mformat v:	; sync
 	syslinux $(obj)/fdimage			; sync
@@ -147,9 +143,9 @@ fdimage fdimage144: $(obj)/bzImage $(obj)/mtools.conf
 	if [ -f '$(FDINITRD)' ] ; then \
 		MTOOLSRC=$(obj)/mtools.conf mcopy '$(FDINITRD)' v:initrd.img ; \
 	fi
-	MTOOLSRC=$(obj)/mtools.conf mcopy $(BOOTIMAGE) v:linux	; sync
+	MTOOLSRC=$(obj)/mtools.conf mcopy $(obj)/bzImage v:linux ; sync

-fdimage288: $(BOOTIMAGE) $(obj)/mtools.conf
+fdimage288: $(obj)/bzImage $(obj)/mtools.conf
 	dd if=/dev/zero of=$(obj)/fdimage bs=1024 count=2880
 	MTOOLSRC=$(obj)/mtools.conf mformat w:	; sync
 	syslinux $(obj)/fdimage			; sync
@@ -158,9 +154,9 @@ fdimage288: $(obj)/bzImage $(obj)/mtools.conf
 	if [ -f '$(FDINITRD)' ] ; then \
 		MTOOLSRC=$(obj)/mtools.conf mcopy '$(FDINITRD)' w:initrd.img ; \
 	fi
-	MTOOLSRC=$(obj)/mtools.conf mcopy $(BOOTIMAGE) w:linux	; sync
+	MTOOLSRC=$(obj)/mtools.conf mcopy $(obj)/bzImage w:linux ; sync

-isoimage: $(BOOTIMAGE)
+isoimage: $(obj)/bzImage
 	-rm -rf $(obj)/isoimage
 	mkdir $(obj)/isoimage
 	for i in lib lib64 share end ; do \
@@ -170,7 +166,7 @@ isoimage: $(obj)/bzImage
 		fi ; \
 		if [ $$i = end ] ; then exit 1 ; fi ; \
 	done
-	cp $(BOOTIMAGE) $(obj)/isoimage/linux
+	cp $(obj)/bzImage $(obj)/isoimage/linux
 	echo '$(image_cmdline)' > $(obj)/isoimage/isolinux.cfg
 	if [ -f '$(FDINITRD)' ] ; then \
 		cp '$(FDINITRD)' $(obj)/isoimage/initrd.img ; \
@@ -181,12 +177,13 @@ isoimage: $(obj)/bzImage
 	isohybrid $(obj)/image.iso 2>/dev/null || true
 	rm -rf $(obj)/isoimage

-zlilo: $(BOOTIMAGE)
+bzlilo: $(obj)/bzImage
 	if [ -f $(INSTALL_PATH)/vmlinuz ]; then mv $(INSTALL_PATH)/vmlinuz $(INSTALL_PATH)/vmlinuz.old; fi
 	if [ -f $(INSTALL_PATH)/System.map ]; then mv $(INSTALL_PATH)/System.map $(INSTALL_PATH)/System.old; fi
-	cat $(BOOTIMAGE) > $(INSTALL_PATH)/vmlinuz
+	cat $(obj)/bzImage > $(INSTALL_PATH)/vmlinuz
 	cp System.map $(INSTALL_PATH)/
 	if [ -x /sbin/lilo ]; then /sbin/lilo; else /etc/lilo/install; fi

 install:
-	sh $(srctree)/$(src)/install.sh $(KERNELRELEASE) $(BOOTIMAGE) System.map "$(INSTALL_PATH)"
+	sh $(srctree)/$(src)/install.sh $(KERNELRELEASE) $(obj)/bzImage \
+		System.map "$(INSTALL_PATH)"
@@ -24,12 +24,8 @@
 #include "boot.h"
 #include "offsets.h"

-SETUPSECTS	= 4			/* default nr of setup-sectors */
 BOOTSEG		= 0x07C0		/* original address of boot-sector */
-SYSSEG		= DEF_SYSSEG		/* system loaded at 0x10000 (65536) */
-SYSSIZE		= DEF_SYSSIZE		/* system size: # of 16-byte clicks */
-					/* to be loaded */
-ROOT_DEV	= 0			/* ROOT_DEV is now written by "build" */
+SYSSEG		= 0x1000		/* historical load address >> 4 */

 #ifndef SVGA_MODE
 #define SVGA_MODE ASK_VGA
@@ -97,12 +93,12 @@ bugger_off_msg:
 	.section ".header", "a"
 	.globl	hdr
 hdr:
-setup_sects:	.byte SETUPSECTS
+setup_sects:	.byte 0			/* Filled in by build.c */
 root_flags:	.word ROOT_RDONLY
-syssize:	.long SYSSIZE
-ram_size:	.word RAMDISK
+syssize:	.long 0			/* Filled in by build.c */
+ram_size:	.word 0			/* Obsolete */
 vid_mode:	.word SVGA_MODE
-root_dev:	.word ROOT_DEV
+root_dev:	.word 0			/* Filled in by build.c */
 boot_flag:	.word 0xAA55

 	# offset 512, entry point
@@ -123,14 +119,15 @@ _start:
 	# or else old loadlin-1.5 will fail)
 	.globl	realmode_swtch
 realmode_swtch:	.word	0, 0		# default_switch, SETUPSEG
-start_sys_seg:	.word	SYSSEG
+start_sys_seg:	.word	SYSSEG		# obsolete and meaningless, but just
+					# in case something decided to "use" it
 	.word	kernel_version-512	# pointing to kernel version string
 					# above section of header is compatible
 					# with loadlin-1.5 (header v1.5). Don't
 					# change it.

-type_of_loader:	.byte	0		# = 0, old one (LILO, Loadlin,
-					#      Bootlin, SYSLX, bootsect...)
+type_of_loader:	.byte	0		# 0 means ancient bootloader, newer
+					# bootloaders know to change this.
 					# See Documentation/i386/boot.txt for
 					# assigned ids

@@ -142,11 +139,7 @@ CAN_USE_HEAP	= 0x80			# If set, the loader also has set
 					# space behind setup.S can be used for
 					# heap purposes.
 					# Only the loader knows what is free
-#ifndef __BIG_KERNEL__
-		.byte	0
-#else
 		.byte	LOADED_HIGH
-#endif

 setup_move_size: .word  0x8000		# size to move, when setup is not
 					# loaded at 0x90000. We will move setup
@@ -157,11 +150,7 @@ setup_move_size: .word  0x8000		# size to move, when setup is not

 code32_start:				# here loaders can put a different
 					# start address for 32-bit code.
-#ifndef __BIG_KERNEL__
-		.long	0x1000		#   0x1000 = default for zImage
-#else
 		.long	0x100000	# 0x100000 = default for big kernel
-#endif

 ramdisk_image:	.long	0		# address of loaded ramdisk image
 					# Here the loader puts the 32-bit
......
@@ -32,47 +32,6 @@ static void realmode_switch_hook(void)
 	}
 }

-/*
- * A zImage kernel is loaded at 0x10000 but wants to run at 0x1000.
- * A bzImage kernel is loaded and runs at 0x100000.
- */
-static void move_kernel_around(void)
-{
-	/* Note: rely on the compile-time option here rather than
-	   the LOADED_HIGH flag.  The Qemu kernel loader unconditionally
-	   sets the loadflags to zero. */
-#ifndef __BIG_KERNEL__
-	u16 dst_seg, src_seg;
-	u32 syssize;
-
-	dst_seg =  0x1000 >> 4;
-	src_seg = 0x10000 >> 4;
-	syssize = boot_params.hdr.syssize; /* Size in 16-byte paragraphs */
-
-	while (syssize) {
-		int paras  = (syssize >= 0x1000) ? 0x1000 : syssize;
-		int dwords = paras << 2;
-
-		asm volatile("pushw %%es ; "
-			     "pushw %%ds ; "
-			     "movw %1,%%es ; "
-			     "movw %2,%%ds ; "
-			     "xorw %%di,%%di ; "
-			     "xorw %%si,%%si ; "
-			     "rep;movsl ; "
-			     "popw %%ds ; "
-			     "popw %%es"
-			     : "+c" (dwords)
-			     : "r" (dst_seg), "r" (src_seg)
-			     : "esi", "edi");
-
-		syssize -= paras;
-		dst_seg += paras;
-		src_seg += paras;
-	}
-#endif
-}
-
 /*
  * Disable all interrupts at the legacy PIC.
  */
@@ -147,9 +106,6 @@ void go_to_protected_mode(void)
 	/* Hook before leaving real mode, also disables interrupts */
 	realmode_switch_hook();

-	/* Move the kernel/setup to their final resting places */
-	move_kernel_around();
-
 	/* Enable the A20 gate */
 	if (enable_a20()) {
 		puts("A20 gate not responding, unable to boot...\n");
......
@@ -47,6 +47,7 @@ GLOBAL(protected_mode_jump)
 ENDPROC(protected_mode_jump)

 	.code32
+	.section ".text32","ax"
 GLOBAL(in_pm32)
 	# Set up data segments for flat 32-bit mode
 	movl	%ecx, %ds
......
@@ -17,7 +17,8 @@ SECTIONS
 	.header		: { *(.header) }
 	.inittext	: { *(.inittext) }
 	.initdata	: { *(.initdata) }
-	.text		: { *(.text*) }
+	.text		: { *(.text) }
+	.text32		: { *(.text32) }

 	. = ALIGN(16);
 	.rodata		: { *(.rodata*) }
......
@@ -130,7 +130,7 @@ static void die(const char * str, ...)

 static void usage(void)
 {
-	die("Usage: build [-b] setup system [rootdev] [> image]");
+	die("Usage: build setup system [rootdev] [> image]");
 }

 int main(int argc, char ** argv)
@@ -145,11 +145,6 @@ int main(int argc, char ** argv)
 	void *kernel;
 	u32 crc = 0xffffffffUL;

-	if (argc > 2 && !strcmp(argv[1], "-b"))
-	  {
-		is_big_kernel = 1;
-		argc--, argv++;
-	  }
 	if ((argc < 3) || (argc > 4))
 		usage();
 	if (argc > 3) {
@@ -216,8 +211,6 @@ int main(int argc, char ** argv)
 		die("Unable to mmap '%s': %m", argv[2]);
 	/* Number of 16-byte paragraphs, including space for a 4-byte CRC */
 	sys_size = (sz + 15 + 4) / 16;
-	if (!is_big_kernel && sys_size > DEF_SYSSIZE)
-		die("System is too big. Try using bzImage or modules.");

 	/* Patch the setup code with the appropriate size parameters */
 	buf[0x1f1] = setup_sectors-1;
......
@@ -129,16 +129,20 @@ u16 vga_crtc(void)
 	return (inb(0x3cc) & 1) ? 0x3d4 : 0x3b4;
 }

-static void vga_set_480_scanlines(int end)
+static void vga_set_480_scanlines(int lines)
 {
-	u16 crtc;
-	u8  csel;
+	u16 crtc;		/* CRTC base address */
+	u8  csel;		/* CRTC miscellaneous output register */
+	u8  ovfw;		/* CRTC overflow register */
+	int end = lines-1;

 	crtc = vga_crtc();

+	ovfw = 0x3c | ((end >> (8-1)) & 0x02) | ((end >> (9-6)) & 0x40);
+
 	out_idx(0x0c, crtc, 0x11); /* Vertical sync end, unlock CR0-7 */
 	out_idx(0x0b, crtc, 0x06); /* Vertical total */
-	out_idx(0x3e, crtc, 0x07); /* Vertical overflow */
+	out_idx(ovfw, crtc, 0x07); /* Vertical overflow */
 	out_idx(0xea, crtc, 0x10); /* Vertical sync start */
 	out_idx(end,  crtc, 0x12); /* Vertical display end */
 	out_idx(0xe7, crtc, 0x15); /* Vertical blank start */
@@ -146,24 +150,24 @@ static void vga_set_480_scanlines(int lines)
 	csel = inb(0x3cc);
 	csel &= 0x0d;
 	csel |= 0xe2;
-	outb(csel, 0x3cc);
+	outb(csel, 0x3c2);
 }

 static void vga_set_80x30(void)
 {
-	vga_set_480_scanlines(0xdf);
+	vga_set_480_scanlines(30*16);
 }

 static void vga_set_80x34(void)
 {
 	vga_set_14font();
-	vga_set_480_scanlines(0xdb);
+	vga_set_480_scanlines(34*14);
 }

 static void vga_set_80x60(void)
 {
 	vga_set_8font();
-	vga_set_480_scanlines(0xdf);
+	vga_set_480_scanlines(60*8);
 }

 static int vga_set_mode(struct mode_info *mode)
......
@@ -75,7 +75,7 @@ static inline void default_inquire_remote_apic(int apicid)
 #define setup_secondary_clock setup_secondary_APIC_clock
 #endif

-#ifdef CONFIG_X86_VSMP
+#ifdef CONFIG_X86_64
 extern int is_vsmp_box(void);
 #else
 static inline int is_vsmp_box(void)
@@ -108,6 +108,16 @@ extern void native_apic_icr_write(u32 low, u32 id);
 extern u64 native_apic_icr_read(void);

 #ifdef CONFIG_X86_X2APIC
+/*
+ * Make previous memory operations globally visible before
+ * sending the IPI through x2apic wrmsr. We need a serializing instruction or
+ * mfence for this.
+ */
+static inline void x2apic_wrmsr_fence(void)
+{
+	asm volatile("mfence" : : : "memory");
+}
+
 static inline void native_apic_msr_write(u32 reg, u32 v)
 {
 	if (reg == APIC_DFR || reg == APIC_ID || reg == APIC_LDR ||
@@ -184,6 +194,9 @@ static inline int x2apic_enabled(void)
 {
 	return 0;
 }
+
+#define	x2apic	0
+
 #endif

 extern int get_physical_broadcast(void);
@@ -379,6 +392,7 @@ static inline u32 safe_apic_wait_icr_idle(void)

 static inline void ack_APIC_irq(void)
 {
+#ifdef CONFIG_X86_LOCAL_APIC
 	/*
	 * ack_APIC_irq() actually gets compiled as a single instruction
	 * ... yummie.
@@ -386,6 +400,7 @@ static inline void ack_APIC_irq(void)

 	/* Docs say use 0 for future compatibility */
 	apic_write(APIC_EOI, 0);
+#endif
 }

 static inline unsigned default_get_apic_id(unsigned long x)
@@ -474,10 +489,19 @@ static inline int default_apic_id_registered(void)
 	return physid_isset(read_apic_id(), phys_cpu_present_map);
 }

+static inline int default_phys_pkg_id(int cpuid_apic, int index_msb)
+{
+	return cpuid_apic >> index_msb;
+}
+
+extern int default_apicid_to_node(int logical_apicid);
+
+#endif
+
 static inline unsigned int
 default_cpu_mask_to_apicid(const struct cpumask *cpumask)
 {
-	return cpumask_bits(cpumask)[0];
+	return cpumask_bits(cpumask)[0] & APIC_ALL_CPUS;
 }

 static inline unsigned int
@@ -491,15 +515,6 @@ default_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
 	return (unsigned int)(mask1 & mask2 & mask3);
 }

-static inline int default_phys_pkg_id(int cpuid_apic, int index_msb)
-{
-	return cpuid_apic >> index_msb;
-}
-
-extern int default_apicid_to_node(int logical_apicid);
-
-#endif
-
 static inline unsigned long default_check_apicid_used(physid_mask_t bitmap, int apicid)
 {
 	return physid_isset(apicid, bitmap);
......
@@ -53,6 +53,7 @@
 #define	APIC_ESR_SENDILL	0x00020
 #define	APIC_ESR_RECVILL	0x00040
 #define	APIC_ESR_ILLREGA	0x00080
+#define	APIC_LVTCMCI	0x2f0
 #define	APIC_ICR	0x300
 #define	APIC_DEST_SELF		0x40000
 #define	APIC_DEST_ALLINC	0x80000
......
 #ifndef _ASM_X86_BOOT_H
 #define _ASM_X86_BOOT_H

-/* Don't touch these, unless you really know what you're doing. */
-#define DEF_SYSSEG	0x1000
-#define DEF_SYSSIZE	0x7F00
-
 /* Internal svga startup constants */
 #define NORMAL_VGA	0xffff		/* 80x25 mode */
 #define EXTENDED_VGA	0xfffe		/* 80x50 mode */
......
@@ -90,6 +90,9 @@ int set_memory_4k(unsigned long addr, int numpages);
 int set_memory_array_uc(unsigned long *addr, int addrinarray);
 int set_memory_array_wb(unsigned long *addr, int addrinarray);

+int set_pages_array_uc(struct page **pages, int addrinarray);
+int set_pages_array_wb(struct page **pages, int addrinarray);
+
 /*
  * For legacy compatibility with the old APIs, a few functions
  * are provided that work on a "struct page".
......
#ifndef _ASM_X86_CPU_DEBUG_H
#define _ASM_X86_CPU_DEBUG_H
/*
* CPU x86 architecture debug
*
* Copyright(C) 2009 Jaswinder Singh Rajput
*/
/* Register flags */
enum cpu_debug_bit {
/* Model Specific Registers (MSRs) */
CPU_MC_BIT, /* Machine Check */
CPU_MONITOR_BIT, /* Monitor */
CPU_TIME_BIT, /* Time */
CPU_PMC_BIT, /* Performance Monitor */
CPU_PLATFORM_BIT, /* Platform */
CPU_APIC_BIT, /* APIC */
CPU_POWERON_BIT, /* Power-on */
CPU_CONTROL_BIT, /* Control */
CPU_FEATURES_BIT, /* Features control */
CPU_LBRANCH_BIT, /* Last Branch */
CPU_BIOS_BIT, /* BIOS */
CPU_FREQ_BIT, /* Frequency */
CPU_MTTR_BIT, /* MTRR */
CPU_PERF_BIT, /* Performance */
CPU_CACHE_BIT, /* Cache */
CPU_SYSENTER_BIT, /* Sysenter */
CPU_THERM_BIT, /* Thermal */
CPU_MISC_BIT, /* Miscellaneous */
CPU_DEBUG_BIT, /* Debug */
CPU_PAT_BIT, /* PAT */
CPU_VMX_BIT, /* VMX */
CPU_CALL_BIT, /* System Call */
CPU_BASE_BIT, /* BASE Address */
CPU_VER_BIT, /* Version ID */
CPU_CONF_BIT, /* Configuration */
CPU_SMM_BIT, /* System mgmt mode */
CPU_SVM_BIT, /* Secure Virtual Machine */
CPU_OSVM_BIT, /* OS-Visible Workaround*/
/* Standard Registers */
CPU_TSS_BIT, /* Task Stack Segment */
CPU_CR_BIT, /* Control Registers */
CPU_DT_BIT, /* Descriptor Table */
/* End of Registers flags */
CPU_REG_ALL_BIT, /* Select all Registers */
};
#define CPU_REG_ALL (~0) /* Select all Registers */
#define CPU_MC (1 << CPU_MC_BIT)
#define CPU_MONITOR (1 << CPU_MONITOR_BIT)
#define CPU_TIME (1 << CPU_TIME_BIT)
#define CPU_PMC (1 << CPU_PMC_BIT)
#define CPU_PLATFORM (1 << CPU_PLATFORM_BIT)
#define CPU_APIC (1 << CPU_APIC_BIT)
#define CPU_POWERON (1 << CPU_POWERON_BIT)
#define CPU_CONTROL (1 << CPU_CONTROL_BIT)
#define CPU_FEATURES (1 << CPU_FEATURES_BIT)
#define CPU_LBRANCH (1 << CPU_LBRANCH_BIT)
#define CPU_BIOS (1 << CPU_BIOS_BIT)
#define CPU_FREQ (1 << CPU_FREQ_BIT)
#define CPU_MTRR (1 << CPU_MTTR_BIT)
#define CPU_PERF (1 << CPU_PERF_BIT)
#define CPU_CACHE (1 << CPU_CACHE_BIT)
#define CPU_SYSENTER (1 << CPU_SYSENTER_BIT)
#define CPU_THERM (1 << CPU_THERM_BIT)
#define CPU_MISC (1 << CPU_MISC_BIT)
#define CPU_DEBUG (1 << CPU_DEBUG_BIT)
#define CPU_PAT (1 << CPU_PAT_BIT)
#define CPU_VMX (1 << CPU_VMX_BIT)
#define CPU_CALL (1 << CPU_CALL_BIT)
#define CPU_BASE (1 << CPU_BASE_BIT)
#define CPU_VER (1 << CPU_VER_BIT)
#define CPU_CONF (1 << CPU_CONF_BIT)
#define CPU_SMM (1 << CPU_SMM_BIT)
#define CPU_SVM (1 << CPU_SVM_BIT)
#define CPU_OSVM (1 << CPU_OSVM_BIT)
#define CPU_TSS (1 << CPU_TSS_BIT)
#define CPU_CR (1 << CPU_CR_BIT)
#define CPU_DT (1 << CPU_DT_BIT)
/* Register file flags */
enum cpu_file_bit {
CPU_INDEX_BIT, /* index */
CPU_VALUE_BIT, /* value */
};
#define CPU_FILE_VALUE (1 << CPU_VALUE_BIT)
/*
* DisplayFamily_DisplayModel Processor Families/Processor Number Series
* -------------------------- ------------------------------------------
* 05_01, 05_02, 05_04 Pentium, Pentium with MMX
*
* 06_01 Pentium Pro
* 06_03, 06_05 Pentium II Xeon, Pentium II
* 06_07, 06_08, 06_0A, 06_0B Pentium III Xeon, Pentium III
*
* 06_09, 06_0D Pentium M
*
* 06_0E Core Duo, Core Solo
*
* 06_0F Xeon 3000, 3200, 5100, 5300, 7300 series,
* Core 2 Quad, Core 2 Extreme, Core 2 Duo,
* Pentium dual-core
* 06_17 Xeon 5200, 5400 series, Core 2 Quad Q9650
*
* 06_1C Atom
*
* 0F_00, 0F_01, 0F_02 Xeon, Xeon MP, Pentium 4
* 0F_03, 0F_04 Xeon, Xeon MP, Pentium 4, Pentium D
*
* 0F_06 Xeon 7100, 5000 Series, Xeon MP,
* Pentium 4, Pentium D
*/
/* Register processors bits */
enum cpu_processor_bit {
CPU_NONE,
/* Intel */
CPU_INTEL_PENTIUM_BIT,
CPU_INTEL_P6_BIT,
CPU_INTEL_PENTIUM_M_BIT,
CPU_INTEL_CORE_BIT,
CPU_INTEL_CORE2_BIT,
CPU_INTEL_ATOM_BIT,
CPU_INTEL_XEON_P4_BIT,
CPU_INTEL_XEON_MP_BIT,
/* AMD */
CPU_AMD_K6_BIT,
CPU_AMD_K7_BIT,
CPU_AMD_K8_BIT,
CPU_AMD_0F_BIT,
CPU_AMD_10_BIT,
CPU_AMD_11_BIT,
};
#define CPU_INTEL_PENTIUM (1 << CPU_INTEL_PENTIUM_BIT)
#define CPU_INTEL_P6 (1 << CPU_INTEL_P6_BIT)
#define CPU_INTEL_PENTIUM_M (1 << CPU_INTEL_PENTIUM_M_BIT)
#define CPU_INTEL_CORE (1 << CPU_INTEL_CORE_BIT)
#define CPU_INTEL_CORE2 (1 << CPU_INTEL_CORE2_BIT)
#define CPU_INTEL_ATOM (1 << CPU_INTEL_ATOM_BIT)
#define CPU_INTEL_XEON_P4 (1 << CPU_INTEL_XEON_P4_BIT)
#define CPU_INTEL_XEON_MP (1 << CPU_INTEL_XEON_MP_BIT)
#define CPU_INTEL_PX (CPU_INTEL_P6 | CPU_INTEL_PENTIUM_M)
#define CPU_INTEL_COREX (CPU_INTEL_CORE | CPU_INTEL_CORE2)
#define CPU_INTEL_XEON (CPU_INTEL_XEON_P4 | CPU_INTEL_XEON_MP)
#define CPU_CO_AT (CPU_INTEL_CORE | CPU_INTEL_ATOM)
#define CPU_C2_AT (CPU_INTEL_CORE2 | CPU_INTEL_ATOM)
#define CPU_CX_AT (CPU_INTEL_COREX | CPU_INTEL_ATOM)
#define CPU_CX_XE (CPU_INTEL_COREX | CPU_INTEL_XEON)
#define CPU_P6_XE (CPU_INTEL_P6 | CPU_INTEL_XEON)
#define CPU_PM_CO_AT (CPU_INTEL_PENTIUM_M | CPU_CO_AT)
#define CPU_C2_AT_XE (CPU_C2_AT | CPU_INTEL_XEON)
#define CPU_CX_AT_XE (CPU_CX_AT | CPU_INTEL_XEON)
#define CPU_P6_CX_AT (CPU_INTEL_P6 | CPU_CX_AT)
#define CPU_P6_CX_XE (CPU_P6_XE | CPU_INTEL_COREX)
#define CPU_P6_CX_AT_XE (CPU_INTEL_P6 | CPU_CX_AT_XE)
#define CPU_PM_CX_AT_XE (CPU_INTEL_PENTIUM_M | CPU_CX_AT_XE)
#define CPU_PM_CX_AT (CPU_INTEL_PENTIUM_M | CPU_CX_AT)
#define CPU_PM_CX_XE (CPU_INTEL_PENTIUM_M | CPU_CX_XE)
#define CPU_PX_CX_AT (CPU_INTEL_PX | CPU_CX_AT)
#define CPU_PX_CX_AT_XE (CPU_INTEL_PX | CPU_CX_AT_XE)
/* Select all supported Intel CPUs */
#define CPU_INTEL_ALL (CPU_INTEL_PENTIUM | CPU_PX_CX_AT_XE)
#define CPU_AMD_K6 (1 << CPU_AMD_K6_BIT)
#define CPU_AMD_K7 (1 << CPU_AMD_K7_BIT)
#define CPU_AMD_K8 (1 << CPU_AMD_K8_BIT)
#define CPU_AMD_0F (1 << CPU_AMD_0F_BIT)
#define CPU_AMD_10 (1 << CPU_AMD_10_BIT)
#define CPU_AMD_11 (1 << CPU_AMD_11_BIT)
#define CPU_K10_PLUS (CPU_AMD_10 | CPU_AMD_11)
#define CPU_K0F_PLUS (CPU_AMD_0F | CPU_K10_PLUS)
#define CPU_K8_PLUS (CPU_AMD_K8 | CPU_K0F_PLUS)
#define CPU_K7_PLUS (CPU_AMD_K7 | CPU_K8_PLUS)
/* Select all supported AMD CPUs */
#define CPU_AMD_ALL (CPU_AMD_K6 | CPU_K7_PLUS)
/* Select all supported CPUs */
#define CPU_ALL (CPU_INTEL_ALL | CPU_AMD_ALL)
#define MAX_CPU_FILES 512
struct cpu_private {
unsigned cpu;
unsigned type;
unsigned reg;
unsigned file;
};
struct cpu_debug_base {
char *name; /* Register name */
unsigned flag; /* Register flag */
unsigned write; /* Register write flag */
};
/*
* Currently it looks similar to cpu_debug_base but once we add more files
* cpu_file_base will go in different direction
*/
struct cpu_file_base {
char *name; /* Register file name */
unsigned flag; /* Register file flag */
unsigned write; /* Register write flag */
};
struct cpu_cpuX_base {
struct dentry *dentry; /* Register dentry */
int init; /* Register index file */
};
struct cpu_debug_range {
unsigned min; /* Register range min */
unsigned max; /* Register range max */
unsigned flag; /* Supported flags */
unsigned model; /* Supported models */
};
#endif /* _ASM_X86_CPU_DEBUG_H */
@@ -91,7 +91,6 @@ static inline int desc_empty(const void *ptr)
 #define store_gdt(dtr) native_store_gdt(dtr)
 #define store_idt(dtr) native_store_idt(dtr)
 #define store_tr(tr) (tr = native_store_tr())
-#define store_ldt(ldt) asm("sldt %0":"=m" (ldt))

 #define load_TLS(t, cpu) native_load_tls(t, cpu)
 #define set_ldt native_set_ldt
@@ -112,6 +111,8 @@ static inline void paravirt_free_ldt(struct desc_struct *ldt, unsigned entries)
 }
 #endif	/* CONFIG_PARAVIRT */

+#define store_ldt(ldt) asm("sldt %0" : "=m"(ldt))
+
 static inline void native_write_idt_entry(gate_desc *idt, int entry,
 					  const gate_desc *gate)
 {
......
 #ifndef _ASM_X86_DMI_H
 #define _ASM_X86_DMI_H

-#include <asm/io.h>
-
-#define DMI_MAX_DATA 2048
+#include <linux/compiler.h>
+#include <linux/init.h>

-extern int dmi_alloc_index;
-extern char dmi_alloc_data[DMI_MAX_DATA];
+#include <asm/io.h>
+#include <asm/setup.h>

-/* This is so early that there is no good way to allocate dynamic memory.
-   Allocate data in an BSS array. */
-static inline void *dmi_alloc(unsigned len)
+static __always_inline __init void *dmi_alloc(unsigned len)
 {
-	int idx = dmi_alloc_index;
-
-	if ((dmi_alloc_index + len) > DMI_MAX_DATA)
-		return NULL;
-
-	dmi_alloc_index += len;
-
-	return dmi_alloc_data + idx;
+	return extend_brk(len, sizeof(int));
 }

 /* Use early IO mappings for DMI because it's initialized early */
......
@@ -72,7 +72,7 @@ extern int e820_all_mapped(u64 start, u64 end, unsigned type);
 extern void e820_add_region(u64 start, u64 size, int type);
 extern void e820_print_map(char *who);
 extern int
-sanitize_e820_map(struct e820entry *biosmap, int max_nr_map, int *pnr_map);
+sanitize_e820_map(struct e820entry *biosmap, int max_nr_map, u32 *pnr_map);
 extern u64 e820_update_range(u64 start, u64 size, unsigned old_type,
 			     unsigned new_type);
 extern u64 e820_remove_range(u64 start, u64 size, unsigned old_type,
......
@@ -33,6 +33,8 @@ BUILD_INTERRUPT3(invalidate_interrupt7,INVALIDATE_TLB_VECTOR_START+7,
 		 smp_invalidate_interrupt)
 #endif

+BUILD_INTERRUPT(generic_interrupt, GENERIC_INTERRUPT_VECTOR)
+
 /*
  * every pentium local APIC has two 'local interrupts', with a
  * soft-definable vector attached to both interrupts, one of
......
@@ -12,6 +12,7 @@ typedef struct {
 	unsigned int apic_timer_irqs;	/* arch dependent */
 	unsigned int irq_spurious_count;
 #endif
+	unsigned int generic_irqs;	/* arch dependent */
 #ifdef CONFIG_SMP
 	unsigned int irq_resched_count;
 	unsigned int irq_call_count;
......
@@ -63,6 +63,7 @@ void *kmap_atomic_prot(struct page *page, enum km_type type, pgprot_t prot);
 void *kmap_atomic(struct page *page, enum km_type type);
 void kunmap_atomic(void *kvaddr, enum km_type type);
 void *kmap_atomic_pfn(unsigned long pfn, enum km_type type);
+void *kmap_atomic_prot_pfn(unsigned long pfn, enum km_type type, pgprot_t prot);
 struct page *kmap_atomic_to_page(void *ptr);

 #ifndef CONFIG_PARAVIRT
......
@@ -27,6 +27,7 @@

 /* Interrupt handlers registered during init_IRQ */
 extern void apic_timer_interrupt(void);
+extern void generic_interrupt(void);
 extern void error_interrupt(void);
 extern void spurious_interrupt(void);
 extern void thermal_interrupt(void);
......
#ifndef _ASM_X86_INIT_32_H
#define _ASM_X86_INIT_32_H
#ifdef CONFIG_X86_32
extern void __init early_ioremap_page_table_range_init(void);
#endif
extern unsigned long __init
kernel_physical_mapping_init(unsigned long start,
			     unsigned long end,
			     unsigned long page_size_mask);
extern unsigned long __initdata e820_table_start;
extern unsigned long __meminitdata e820_table_end;
extern unsigned long __meminitdata e820_table_top;
#endif /* _ASM_X86_INIT_32_H */
@@ -162,7 +162,8 @@ extern int (*ioapic_renumber_irq)(int ioapic, int irq);
 extern void ioapic_init_mappings(void);

 #ifdef CONFIG_X86_64
-extern int save_mask_IO_APIC_setup(void);
+extern int save_IO_APIC_setup(void);
+extern void mask_IO_APIC_setup(void);
 extern void restore_IO_APIC_setup(void);
 extern void reinit_intr_remapped_IO_APIC(int);
 #endif
@@ -172,7 +173,7 @@ extern void probe_nr_irqs_gsi(void);
 extern int setup_ioapic_entry(int apic, int irq,
 			      struct IO_APIC_route_entry *entry,
 			      unsigned int destination, int trigger,
-			      int polarity, int vector);
+			      int polarity, int vector, int pin);
 extern void ioapic_write_entry(int apic, int pin,
 			       struct IO_APIC_route_entry e);
 #else /* !CONFIG_X86_IO_APIC */
......
@@ -36,6 +36,7 @@ static inline int irq_canonicalize(int irq)
 extern void fixup_irqs(void);
 #endif

+extern void (*generic_interrupt_extension)(void);
 extern void init_IRQ(void);
 extern void native_init_IRQ(void);
 extern bool handle_irq(unsigned irq, struct pt_regs *regs);
......
 #ifndef _ASM_X86_IRQ_REMAPPING_H
 #define _ASM_X86_IRQ_REMAPPING_H

-extern int x2apic;
-
 #define IRTE_DEST(dest) ((x2apic) ? dest : dest << 8)

 #endif	/* _ASM_X86_IRQ_REMAPPING_H */
@@ -111,6 +111,11 @@
  */
 #define LOCAL_PERF_VECTOR	0xee

+/*
+ * Generic system vector for platform specific use
+ */
+#define GENERIC_INTERRUPT_VECTOR	0xed
+
 /*
  * First APIC vector available to drivers: (vectors 0x30-0xee) we
  * start at 0x31(0x41) to spread out vectors evenly between priority
......
@@ -9,13 +9,13 @@
 # define PAGES_NR		4
 #else
 # define PA_CONTROL_PAGE	0
-# define PA_TABLE_PAGE		1
-# define PAGES_NR		2
+# define VA_CONTROL_PAGE	1
+# define PA_TABLE_PAGE		2
+# define PA_SWAP_PAGE		3
+# define PAGES_NR		4
 #endif

-#ifdef CONFIG_X86_32
 # define KEXEC_CONTROL_CODE_MAX_SIZE	2048
-#endif

 #ifndef __ASSEMBLY__
@@ -136,10 +136,11 @@ relocate_kernel(unsigned long indirection_page,
 		unsigned int has_pae,
 		unsigned int preserve_context);
 #else
-NORET_TYPE void
+unsigned long
 relocate_kernel(unsigned long indirection_page,
 		unsigned long page_list,
-		unsigned long start_address) ATTRIB_NORET;
+		unsigned long start_address,
+		unsigned int preserve_context);
 #endif

 #define ARCH_HAS_KIMAGE_ARCH
......
 #ifndef _ASM_X86_LINKAGE_H
 #define _ASM_X86_LINKAGE_H

+#include <linux/stringify.h>
+
 #undef notrace
 #define notrace __attribute__((no_instrument_function))

-#ifdef CONFIG_X86_64
-#define __ALIGN .p2align 4,,15
-#define __ALIGN_STR ".p2align 4,,15"
-#endif
-
 #ifdef CONFIG_X86_32
 #define asmlinkage CPP_ASMLINKAGE __attribute__((regparm(0)))
 /*
@@ -50,16 +47,20 @@
 	__asmlinkage_protect_n(ret, "g" (arg1), "g" (arg2), "g" (arg3), \
 			      "g" (arg4), "g" (arg5), "g" (arg6))

-#endif
+#endif /* CONFIG_X86_32 */
+
+#ifdef __ASSEMBLY__

 #define GLOBAL(name)	\
 	.globl name;	\
 	name:

-#ifdef CONFIG_X86_ALIGNMENT_16
-#define __ALIGN .align 16,0x90
-#define __ALIGN_STR ".align 16,0x90"
+#if defined(CONFIG_X86_64) || defined(CONFIG_X86_ALIGNMENT_16)
+#define __ALIGN		.p2align 4, 0x90
+#define __ALIGN_STR	__stringify(__ALIGN)
 #endif

+#endif /* __ASSEMBLY__ */
+
 #endif /* _ASM_X86_LINKAGE_H */
@@ -11,6 +11,8 @@
  */

 #define MCG_CTL_P	 (1UL<<8)   /* MCG_CAP register available */
+#define MCG_EXT_P	 (1ULL<<9)  /* Extended registers available */
+#define MCG_CMCI_P	 (1ULL<<10) /* CMCI supported */

 #define MCG_STATUS_RIPV  (1UL<<0)   /* restart ip valid */
 #define MCG_STATUS_EIPV  (1UL<<1)   /* ip points to correct instruction */
@@ -90,14 +92,29 @@ extern int mce_disabled;

 #include <asm/atomic.h>

+void mce_setup(struct mce *m);
 void mce_log(struct mce *m);
 DECLARE_PER_CPU(struct sys_device, device_mce);
 extern void (*threshold_cpu_callback)(unsigned long action, unsigned int cpu);

+/*
+ * To support more than 128 would need to escape the predefined
+ * Linux defined extended banks first.
+ */
+#define MAX_NR_BANKS (MCE_EXTENDED_BANK - 1)
+
 #ifdef CONFIG_X86_MCE_INTEL
 void mce_intel_feature_init(struct cpuinfo_x86 *c);
+void cmci_clear(void);
+void cmci_reenable(void);
+void cmci_rediscover(int dying);
+void cmci_recheck(void);
 #else
 static inline void mce_intel_feature_init(struct cpuinfo_x86 *c) { }
+static inline void cmci_clear(void) {}
+static inline void cmci_reenable(void) {}
+static inline void cmci_rediscover(int dying) {}
+static inline void cmci_recheck(void) {}
 #endif

 #ifdef CONFIG_X86_MCE_AMD
@@ -106,11 +123,23 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c);
 static inline void mce_amd_feature_init(struct cpuinfo_x86 *c) { }
 #endif

-void mce_log_therm_throt_event(unsigned int cpu, __u64 status);
+extern int mce_available(struct cpuinfo_x86 *c);
+
+void mce_log_therm_throt_event(__u64 status);

 extern atomic_t mce_entry;

 extern void do_machine_check(struct pt_regs *, long);

+typedef DECLARE_BITMAP(mce_banks_t, MAX_NR_BANKS);
+DECLARE_PER_CPU(mce_banks_t, mce_poll_banks);
+
+enum mcp_flags {
+	MCP_TIMESTAMP = (1 << 0),	/* log time stamp */
+	MCP_UC = (1 << 1),		/* log uncorrected errors */
+};
+extern void machine_check_poll(enum mcp_flags flags, mce_banks_t *b);
+
 extern int mce_notify_user(void);

 #endif /* !CONFIG_X86_32 */
@@ -120,8 +149,8 @@ extern void mcheck_init(struct cpuinfo_x86 *c);
 #else
 #define mcheck_init(c) do { } while (0)
 #endif
-extern void stop_mce(void);
-extern void restart_mce(void);
+
+extern void (*mce_threshold_vector)(void);

 #endif /* __KERNEL__ */
 #endif /* _ASM_X86_MCE_H */
@@ -47,6 +47,7 @@
 #define  MSI_ADDR_DEST_ID_MASK		0x00ffff0
 #define  MSI_ADDR_DEST_ID(dest)		(((dest) << MSI_ADDR_DEST_ID_SHIFT) & \
 					 MSI_ADDR_DEST_ID_MASK)
+#define MSI_ADDR_EXT_DEST_ID(dest)	((dest) & 0xffffff00)

 #define MSI_ADDR_IR_EXT_INT		(1 << 4)
 #define MSI_ADDR_IR_SHV			(1 << 3)
......
@@ -81,6 +81,11 @@
 #define MSR_IA32_MC0_ADDR		0x00000402
 #define MSR_IA32_MC0_MISC		0x00000403

+/* These are consecutive and not in the normal 4er MCE bank block */
+#define MSR_IA32_MC0_CTL2		0x00000280
+#define CMCI_EN				(1ULL << 30)
+#define CMCI_THRESHOLD_MASK		0xffffULL
+
 #define MSR_P6_PERFCTR0			0x000000c1
 #define MSR_P6_PERFCTR1			0x000000c2
 #define MSR_P6_EVNTSEL0			0x00000186
......
@@ -39,6 +39,11 @@
 #define __VIRTUAL_MASK_SHIFT	32
 #endif	/* CONFIG_X86_PAE */

+/*
+ * Kernel image size is limited to 512 MB (see in arch/x86/kernel/head_32.S)
+ */
+#define KERNEL_IMAGE_SIZE	(512 * 1024 * 1024)
+
 #ifndef __ASSEMBLY__

 /*
......
@@ -40,14 +40,8 @@

 #ifndef __ASSEMBLY__

-struct pgprot;
-
 extern int page_is_ram(unsigned long pagenr);
 extern int devmem_is_allowed(unsigned long pagenr);
-extern void map_devmem(unsigned long pfn, unsigned long size,
-		       struct pgprot vma_prot);
-extern void unmap_devmem(unsigned long pfn, unsigned long size,
-			 struct pgprot vma_prot);

 extern unsigned long max_low_pfn_mapped;
 extern unsigned long max_pfn_mapped;
......
@@ -317,8 +317,6 @@ struct pv_mmu_ops {
 #if PAGETABLE_LEVELS >= 3
 #ifdef CONFIG_X86_PAE
 	void (*set_pte_atomic)(pte_t *ptep, pte_t pteval);
-	void (*set_pte_present)(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte);
 	void (*pte_clear)(struct mm_struct *mm, unsigned long addr,
 			  pte_t *ptep);
 	void (*pmd_clear)(pmd_t *pmdp);
@@ -389,7 +387,7 @@ extern struct pv_lock_ops pv_lock_ops;

 #define paravirt_type(op)				\
 	[paravirt_typenum] "i" (PARAVIRT_PATCH(op)),	\
-	[paravirt_opptr] "m" (op)
+	[paravirt_opptr] "i" (&(op))
 #define paravirt_clobber(clobber)		\
 	[paravirt_clobber] "i" (clobber)
@@ -443,7 +441,7 @@ int paravirt_disable_iospace(void);
 * offset into the paravirt_patch_template structure, and can therefore be
 * freely converted back into a structure offset.
 */
-#define PARAVIRT_CALL	"call *%[paravirt_opptr];"
+#define PARAVIRT_CALL	"call *%c[paravirt_opptr];"

 /*
 * These macros are intended to wrap calls through one of the paravirt
@@ -1365,13 +1363,6 @@ static inline void set_pte_atomic(pte_t *ptep, pte_t pte)
 		    pte.pte, pte.pte >> 32);
 }

-static inline void set_pte_present(struct mm_struct *mm, unsigned long addr,
-				   pte_t *ptep, pte_t pte)
-{
-	/* 5 arg words */
-	pv_mmu_ops.set_pte_present(mm, addr, ptep, pte);
-}
-
 static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
 			     pte_t *ptep)
 {
@@ -1388,12 +1379,6 @@ static inline void set_pte_atomic(pte_t *ptep, pte_t pte)
 	set_pte(ptep, pte);
 }

-static inline void set_pte_present(struct mm_struct *mm, unsigned long addr,
-				   pte_t *ptep, pte_t pte)
-{
-	set_pte(ptep, pte);
-}
-
 static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
 			     pte_t *ptep)
 {
......
@@ -2,6 +2,7 @@
 #define _ASM_X86_PAT_H
 #include <linux/types.h>
+#include <asm/pgtable_types.h>
 #ifdef CONFIG_X86_PAT
 extern int pat_enabled;
@@ -17,5 +18,9 @@ extern int free_memtype(u64 start, u64 end);
 extern int kernel_map_sync_memtype(u64 base, unsigned long size,
 		unsigned long flag);
+extern void map_devmem(unsigned long pfn, unsigned long size,
+		       struct pgprot vma_prot);
+extern void unmap_devmem(unsigned long pfn, unsigned long size,
+			 struct pgprot vma_prot);
+
 #endif /* _ASM_X86_PAT_H */
@@ -26,13 +26,6 @@ static inline void native_set_pte_atomic(pte_t *ptep, pte_t pte)
 	native_set_pte(ptep, pte);
 }
-static inline void native_set_pte_present(struct mm_struct *mm,
-					  unsigned long addr,
-					  pte_t *ptep, pte_t pte)
-{
-	native_set_pte(ptep, pte);
-}
-
 static inline void native_pmd_clear(pmd_t *pmdp)
 {
 	native_set_pmd(pmdp, __pmd(0));
......
@@ -31,23 +31,6 @@ static inline void native_set_pte(pte_t *ptep, pte_t pte)
 	ptep->pte_low = pte.pte_low;
 }
-/*
- * Since this is only called on user PTEs, and the page fault handler
- * must handle the already racy situation of simultaneous page faults,
- * we are justified in merely clearing the PTE present bit, followed
- * by a set.  The ordering here is important.
- */
-static inline void native_set_pte_present(struct mm_struct *mm,
-					  unsigned long addr,
-					  pte_t *ptep, pte_t pte)
-{
-	ptep->pte_low = 0;
-	smp_wmb();
-	ptep->pte_high = pte.pte_high;
-	smp_wmb();
-	ptep->pte_low = pte.pte_low;
-}
-
 static inline void native_set_pte_atomic(pte_t *ptep, pte_t pte)
 {
 	set_64bit((unsigned long long *)(ptep), native_pte_val(pte));
......
@@ -31,8 +31,6 @@ extern struct list_head pgd_list;
 #define set_pte(ptep, pte)		native_set_pte(ptep, pte)
 #define set_pte_at(mm, addr, ptep, pte)	native_set_pte_at(mm, addr, ptep, pte)
-#define set_pte_present(mm, addr, ptep, pte)				\
-	native_set_pte_present(mm, addr, ptep, pte)
 #define set_pte_atomic(ptep, pte)					\
 	native_set_pte_atomic(ptep, pte)
......
@@ -42,9 +42,6 @@ extern void set_pmd_pfn(unsigned long, unsigned long, pgprot_t);
 */
 #undef TEST_ACCESS_OK
-/* The boot page tables (all created as a single array) */
-extern unsigned long pg0[];
-
 #ifdef CONFIG_X86_PAE
 # include <asm/pgtable-3level.h>
 #else
......
@@ -25,6 +25,11 @@
 * area for the same reason. ;)
 */
 #define VMALLOC_OFFSET	(8 * 1024 * 1024)
+
+#ifndef __ASSEMBLER__
+extern bool __vmalloc_start_set; /* set once high_memory is set */
+#endif
+
 #define VMALLOC_START	((unsigned long)high_memory + VMALLOC_OFFSET)
 #ifdef CONFIG_X86_PAE
 #define LAST_PKMAP 512
......
@@ -273,6 +273,7 @@ typedef struct page *pgtable_t;
 extern pteval_t __supported_pte_mask;
 extern int nx_enabled;
+extern void set_nx(void);
 #define pgprot_writecombine	pgprot_writecombine
 extern pgprot_t pgprot_writecombine(pgprot_t prot);
......
@@ -75,9 +75,9 @@ struct cpuinfo_x86 {
 #else
 	/* Number of 4K pages in DTLB/ITLB combined(in pages): */
 	int			x86_tlbsize;
-#endif
 	__u8			x86_virt_bits;
 	__u8			x86_phys_bits;
+#endif
 	/* CPUID returned core id bits: */
 	__u8			x86_coreid_bits;
 	/* Max extended CPUID function supported: */
@@ -391,6 +391,9 @@ DECLARE_PER_CPU(union irq_stack_union, irq_stack_union);
 DECLARE_INIT_PER_CPU(irq_stack_union);
 DECLARE_PER_CPU(char *, irq_stack_ptr);
+DECLARE_PER_CPU(unsigned int, irq_count);
+extern unsigned long kernel_eflags;
+extern asmlinkage void ignore_sysret(void);
 #else	/* X86_64 */
 #ifdef CONFIG_CC_STACKPROTECTOR
 DECLARE_PER_CPU(unsigned long, stack_canary);
......
+#ifndef _ASM_X86_SECTIONS_H
+#define _ASM_X86_SECTIONS_H
+
 #include <asm-generic/sections.h>
+
+extern char __brk_base[], __brk_limit[];
+
+#endif	/* _ASM_X86_SECTIONS_H */
@@ -64,7 +64,7 @@ extern void x86_quirk_time_init(void);
 #include <asm/bootparam.h>
 /* Interrupt control for vSMPowered x86_64 systems */
-#ifdef CONFIG_X86_VSMP
+#ifdef CONFIG_X86_64
 void vsmp_init(void);
 #else
 static inline void vsmp_init(void) { }
@@ -100,20 +100,51 @@ extern struct boot_params boot_params;
 */
 #define LOWMEMSIZE()	(0x9f000)
+/* exceedingly early brk-like allocator */
+extern unsigned long _brk_end;
+void *extend_brk(size_t size, size_t align);
+
+/*
+ * Reserve space in the brk section.  The name must be unique within
+ * the file, and somewhat descriptive.  The size is in bytes.  Must be
+ * used at file scope.
+ *
+ * (This uses a temp function to wrap the asm so we can pass it the
+ * size parameter; otherwise we wouldn't be able to.  We can't use a
+ * "section" attribute on a normal variable because it always ends up
+ * being @progbits, which ends up allocating space in the vmlinux
+ * executable.)
+ */
+#define RESERVE_BRK(name,sz)						\
+	static void __section(.discard) __used				\
+	__brk_reservation_fn_##name##__(void) {				\
+		asm volatile (						\
+			".pushsection .brk_reservation,\"aw\",@nobits;" \
+			".brk." #name ":"				\
+			" 1:.skip %c0;"					\
+			" .size .brk." #name ", . - 1b;"		\
+			" .popsection"					\
+			: : "i" (sz));					\
+	}
+
 #ifdef __i386__
 void __init i386_start_kernel(void);
 extern void probe_roms(void);
-extern unsigned long init_pg_tables_start;
-extern unsigned long init_pg_tables_end;
 #else
 void __init x86_64_start_kernel(char *real_mode);
 void __init x86_64_start_reservations(char *real_mode_data);
 #endif /* __i386__ */
 #endif /* _SETUP */
+#else
+#define RESERVE_BRK(name,sz)				\
+	.pushsection .brk_reservation,"aw",@nobits;	\
+.brk.name:						\
+1:	.skip sz;					\
+	.size .brk.name,.-1b;				\
+	.popsection
 #endif /* __ASSEMBLY__ */
 #endif /* __KERNEL__ */
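A brief usage sketch for the RESERVE_BRK()/extend_brk() pair defined in the hunk above — the names here are hypothetical, not from the patch. Each RESERVE_BRK() grows the @nobits .brk_reservation output section at link time, and extend_brk() hands out pieces of that area during early boot:

	/* Hypothetical: reserve 64 KiB of early-boot scratch in the brk area. */
	RESERVE_BRK(early_scratch, 65536);

	static void __init early_user(void)
	{
		/* Carve a zeroed, PAGE_SIZE-aligned chunk out of the brk area. */
		void *buf = extend_brk(4096, PAGE_SIZE);
		/* ... use buf before the real allocators are up ... */
	}

Since extend_brk() BUGs out when the area is exhausted, the RESERVE_BRK() sizes effectively act as a compile-time budget for these early allocations.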
......
@@ -199,6 +199,10 @@ DECLARE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);
 #define SCIR_CPU_ACTIVITY	0x02	/* not idle */
 #define SCIR_CPU_HB_INTERVAL	(HZ)	/* once per second */
+/* Loop through all installed blades */
+#define for_each_possible_blade(bid)		\
+	for ((bid) = 0; (bid) < uv_num_possible_blades(); (bid)++)
+
 /*
 * Macros for converting between kernel virtual addresses, socket local physical
 * addresses, and UV global physical addresses.
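A minimal usage sketch for the iterator added above (the loop body is hypothetical; uv_num_possible_blades() comes from the surrounding UV hub header):

	int bid;

	/* Visit every possible blade id, 0 .. uv_num_possible_blades() - 1. */
	for_each_possible_blade(bid)
		pr_info("UV: probing blade %d\n", bid);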
......
@@ -296,6 +296,8 @@ HYPERVISOR_get_debugreg(int reg)
 static inline int
 HYPERVISOR_update_descriptor(u64 ma, u64 desc)
 {
+	if (sizeof(u64) == sizeof(long))
+		return _hypercall2(int, update_descriptor, ma, desc);
 	return _hypercall4(int, update_descriptor, ma, ma>>32, desc, desc>>32);
 }
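The added test is resolved at compile time: on a 64-bit build sizeof(u64) == sizeof(long), the condition is a constant, and the dead four-argument branch is discarded, while 32-bit builds keep splitting each u64 into two 32-bit halves. A freestanding sketch of the same idiom in plain C (hypothetical function, not from the patch):

	#include <stdio.h>

	/* Constant-folded word-size dispatch: the untaken branch is dead code. */
	static void show_u64(unsigned long long v)
	{
		if (sizeof(unsigned long long) == sizeof(long))
			printf("fits one register: %#llx\n", v);
		else
			printf("split into halves: %#lx %#lx\n",
			       (unsigned long)(v >> 32),
			       (unsigned long)(v & 0xffffffffUL));
	}

	int main(void)
	{
		show_u64(0x0123456789abcdefULL);
		return 0;
	}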
......
@@ -70,7 +70,6 @@ obj-$(CONFIG_FUNCTION_GRAPH_TRACER)	+= ftrace.o
 obj-$(CONFIG_KEXEC)		+= machine_kexec_$(BITS).o
 obj-$(CONFIG_KEXEC)		+= relocate_kernel_$(BITS).o crash.o
 obj-$(CONFIG_CRASH_DUMP)	+= crash_dump_$(BITS).o
-obj-$(CONFIG_X86_VSMP)		+= vsmp_64.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_MODULES)		+= module_$(BITS).o
 obj-$(CONFIG_EFI)		+= efi.o efi_$(BITS).o efi_stub_$(BITS).o
@@ -111,7 +110,7 @@ obj-$(CONFIG_SWIOTLB)	+= pci-swiotlb_64.o # NB rename without _64
 ###
 # 64 bit specific files
 ifeq ($(CONFIG_X86_64),y)
-	obj-$(CONFIG_X86_UV)	+= tlb_uv.o bios_uv.o uv_irq.o uv_sysfs.o
+	obj-$(CONFIG_X86_UV)	+= tlb_uv.o bios_uv.o uv_irq.o uv_sysfs.o uv_time.o
 	obj-$(CONFIG_X86_PM_TIMER)	+= pmtimer_64.o
 	obj-$(CONFIG_AUDIT)	+= audit_64.o
@@ -120,4 +119,5 @@ ifeq ($(CONFIG_X86_64),y)
 	obj-$(CONFIG_AMD_IOMMU)	+= amd_iommu_init.o amd_iommu.o
 	obj-$(CONFIG_PCI_MMCONFIG)	+= mmconf-fam10h_64.o
+	obj-y			+= vsmp_64.o
 endif
@@ -414,9 +414,17 @@ void __init alternative_instructions(void)
 	   that might execute the to be patched code.
 	   Other CPUs are not running. */
 	stop_nmi();
-#ifdef CONFIG_X86_MCE
-	stop_mce();
-#endif
+
+	/*
+	 * Don't stop machine check exceptions while patching.
+	 * MCEs only happen when something got corrupted and in this
+	 * case we must do something about the corruption.
+	 * Ignoring it is worse than an unlikely patching race.
+	 * Also machine checks tend to be broadcast and if one CPU
+	 * goes into machine check the others follow quickly, so we don't
+	 * expect a machine check to cause undue problems during code
+	 * patching.
+	 */
 	apply_alternatives(__alt_instructions, __alt_instructions_end);
@@ -456,9 +464,6 @@ void __init alternative_instructions(void)
 				(unsigned long)__smp_locks_end);
 	restart_nmi();
-#ifdef CONFIG_X86_MCE
-	restart_mce();
-#endif
 }
 /**
......
@@ -46,6 +46,7 @@
 #include <asm/idle.h>
 #include <asm/mtrr.h>
 #include <asm/smp.h>
+#include <asm/mce.h>
 unsigned int num_processors;
@@ -808,7 +809,7 @@ void clear_local_APIC(void)
 	u32 v;
 	/* APIC hasn't been mapped yet */
-	if (!apic_phys)
+	if (!x2apic && !apic_phys)
 		return;
 	maxlvt = lapic_get_maxlvt();
@@ -842,6 +843,14 @@ void clear_local_APIC(void)
 		apic_write(APIC_LVTTHMR, v | APIC_LVT_MASKED);
 	}
 #endif
+#ifdef CONFIG_X86_MCE_INTEL
+	if (maxlvt >= 6) {
+		v = apic_read(APIC_LVTCMCI);
+		if (!(v & APIC_LVT_MASKED))
+			apic_write(APIC_LVTCMCI, v | APIC_LVT_MASKED);
+	}
+#endif
 	/*
 	 * Clean APIC state for other OSs:
 	 */
@@ -1241,6 +1250,12 @@ void __cpuinit setup_local_APIC(void)
 	apic_write(APIC_LVT1, value);
 	preempt_enable();
+
+#ifdef CONFIG_X86_MCE_INTEL
+	/* Recheck CMCI information after local APIC is up on CPU #0 */
+	if (smp_processor_id() == 0)
+		cmci_recheck();
+#endif
 }
 void __cpuinit end_local_APIC_setup(void)
@@ -1319,15 +1334,16 @@ void __init enable_IR_x2apic(void)
 		return;
 	}
-	local_irq_save(flags);
-	mask_8259A();
-
-	ret = save_mask_IO_APIC_setup();
+	ret = save_IO_APIC_setup();
 	if (ret) {
 		pr_info("Saving IO-APIC state failed: %d\n", ret);
 		goto end;
 	}
+
+	local_irq_save(flags);
+	mask_IO_APIC_setup();
+	mask_8259A();
 	ret = enable_intr_remapping(1);
 	if (ret && x2apic_preenabled) {
@@ -1352,10 +1368,10 @@
 	else
 		reinit_intr_remapped_IO_APIC(x2apic_preenabled);
-end:
 	unmask_8259A();
 	local_irq_restore(flags);
+end:
 	if (!ret) {
 		if (!x2apic_preenabled)
 			pr_info("Enabled x2apic and interrupt-remapping\n");
@@ -1508,12 +1524,10 @@ void __init early_init_lapic_mapping(void)
 */
 void __init init_apic_mappings(void)
 {
-#ifdef CONFIG_X86_X2APIC
 	if (x2apic) {
 		boot_cpu_physical_apicid = read_apic_id();
 		return;
 	}
-#endif
 	/*
 	 * If no local APIC can be found then set up a fake all
@@ -1957,12 +1971,9 @@ static int lapic_resume(struct sys_device *dev)
 	local_irq_save(flags);
-#ifdef CONFIG_X86_X2APIC
 	if (x2apic)
 		enable_x2apic();
-	else
-#endif
-	{
+	else {
 		/*
 		 * Make sure the APICBASE points to the right address
 		 *
......
@@ -159,20 +159,6 @@ static int flat_apic_id_registered(void)
 	return physid_isset(read_xapic_id(), phys_cpu_present_map);
 }
-static unsigned int flat_cpu_mask_to_apicid(const struct cpumask *cpumask)
-{
-	return cpumask_bits(cpumask)[0] & APIC_ALL_CPUS;
-}
-
-static unsigned int flat_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
-						const struct cpumask *andmask)
-{
-	unsigned long mask1 = cpumask_bits(cpumask)[0] & APIC_ALL_CPUS;
-	unsigned long mask2 = cpumask_bits(andmask)[0] & APIC_ALL_CPUS;
-
-	return mask1 & mask2;
-}
-
 static int flat_phys_pkg_id(int initial_apic_id, int index_msb)
 {
 	return hard_smp_processor_id() >> index_msb;
@@ -213,8 +199,8 @@ struct apic apic_flat = {
 	.set_apic_id			= set_apic_id,
 	.apic_id_mask			= 0xFFu << 24,
-	.cpu_mask_to_apicid		= flat_cpu_mask_to_apicid,
-	.cpu_mask_to_apicid_and		= flat_cpu_mask_to_apicid_and,
+	.cpu_mask_to_apicid		= default_cpu_mask_to_apicid,
+	.cpu_mask_to_apicid_and		= default_cpu_mask_to_apicid_and,
 	.send_IPI_mask			= flat_send_IPI_mask,
 	.send_IPI_mask_allbutself	= flat_send_IPI_mask_allbutself,
......
@@ -68,6 +68,13 @@ void __init default_setup_apic_routing(void)
 			apic = &apic_physflat;
 		printk(KERN_INFO "Setting APIC routing to %s\n", apic->name);
 	}
+
+	/*
+	 * Now that apic routing model is selected, configure the
+	 * fault handling for intr remapping.
+	 */
+	if (intr_remapping_enabled)
+		enable_drhd_fault_handling();
 }
 /* Same for both flat and physical. */
......
@@ -57,6 +57,8 @@ static void x2apic_send_IPI_mask(const struct cpumask *mask, int vector)
 	unsigned long query_cpu;
 	unsigned long flags;
+	x2apic_wrmsr_fence();
+
 	local_irq_save(flags);
 	for_each_cpu(query_cpu, mask) {
 		__x2apic_send_IPI_dest(
@@ -73,6 +75,8 @@ static void
 	unsigned long query_cpu;
 	unsigned long flags;
+	x2apic_wrmsr_fence();
+
 	local_irq_save(flags);
 	for_each_cpu(query_cpu, mask) {
 		if (query_cpu == this_cpu)
@@ -90,6 +94,8 @@ static void x2apic_send_IPI_allbutself(int vector)
 	unsigned long query_cpu;
 	unsigned long flags;
+	x2apic_wrmsr_fence();
+
 	local_irq_save(flags);
 	for_each_online_cpu(query_cpu) {
 		if (query_cpu == this_cpu)
......
@@ -58,6 +58,8 @@ static void x2apic_send_IPI_mask(const struct cpumask *mask, int vector)
 	unsigned long query_cpu;
 	unsigned long flags;
+	x2apic_wrmsr_fence();
+
 	local_irq_save(flags);
 	for_each_cpu(query_cpu, mask) {
 		__x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu),
@@ -73,6 +75,8 @@ static void
 	unsigned long query_cpu;
 	unsigned long flags;
+	x2apic_wrmsr_fence();
+
 	local_irq_save(flags);
 	for_each_cpu(query_cpu, mask) {
 		if (query_cpu != this_cpu)
@@ -89,6 +93,8 @@ static void x2apic_send_IPI_allbutself(int vector)
 	unsigned long query_cpu;
 	unsigned long flags;
+	x2apic_wrmsr_fence();
+
 	local_irq_save(flags);
 	for_each_online_cpu(query_cpu) {
 		if (query_cpu == this_cpu)
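The same fence lands before every IPI loop in both x2APIC drivers for a reason the hunks only imply: xAPIC MMIO writes are ordered against earlier stores, but a WRMSR to the x2APIC ICR is not, so data the target CPU must observe has to be made globally visible before the IPI fires. A sketch of what the helper plausibly does — an assumption, since its body is not part of this excerpt:

	static inline void x2apic_wrmsr_fence(void)
	{
		/* Order earlier stores before the WRMSR that sends the IPI. */
		asm volatile("mfence" : : : "memory");
	}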
......
@@ -83,15 +83,15 @@ void __init setup_bios_corruption_check(void)
 		u64 size;
 		addr = find_e820_area_size(addr, &size, PAGE_SIZE);
-		if (addr == 0)
+		if (!(addr + 1))
 			break;
+
+		if (addr >= corruption_check_size)
+			break;
 		if ((addr + size) > corruption_check_size)
 			size = corruption_check_size - addr;
-		if (size == 0)
-			break;
 		e820_update_range(addr, size, E820_RAM, E820_RESERVED);
 		scan_areas[num_scan_areas].addr = addr;
 		scan_areas[num_scan_areas].size = size;
@@ -14,11 +14,12 @@ obj-y			+= vmware.o hypervisor.o
 obj-$(CONFIG_X86_32)	+= bugs.o cmpxchg.o
 obj-$(CONFIG_X86_64)	+= bugs_64.o
+obj-$(CONFIG_X86_CPU_DEBUG)		+= cpu_debug.o
+
 obj-$(CONFIG_CPU_SUP_INTEL)		+= intel.o
 obj-$(CONFIG_CPU_SUP_AMD)		+= amd.o
 obj-$(CONFIG_CPU_SUP_CYRIX_32)		+= cyrix.o
-obj-$(CONFIG_CPU_SUP_CENTAUR_32)	+= centaur.o
-obj-$(CONFIG_CPU_SUP_CENTAUR_64)	+= centaur_64.o
+obj-$(CONFIG_CPU_SUP_CENTAUR)		+= centaur.o
 obj-$(CONFIG_CPU_SUP_TRANSMETA_32)	+= transmeta.o
 obj-$(CONFIG_CPU_SUP_UMC_32)		+= umc.o
......
@@ -29,7 +29,7 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
 	u32 regs[4];
 	const struct cpuid_bit *cb;
-	static const struct cpuid_bit cpuid_bits[] = {
+	static const struct cpuid_bit __cpuinitconst cpuid_bits[] = {
 		{ X86_FEATURE_IDA, CR_EAX, 1, 0x00000006 },
 		{ 0, 0, 0, 0 }
 	};
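This hunk is the first of many __cpuinitdata → __cpuinitconst conversions in the series. The distinction matters because a const object must be placed in a read-only section; tagging it with the writable CPU-init data section produces a compiler section-type conflict. A sketch of the two annotations, with hypothetical variables:

	/* Writable CPU-init data: placed in .cpuinit.data, freed after boot. */
	static int probe_tuning __cpuinitdata = 42;

	/* Read-only CPU-init data: must be __cpuinitconst (.cpuinit.rodata);
	 * combining 'const' with __cpuinitdata causes a section type conflict. */
	static const char probe_banner[] __cpuinitconst = "probing";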
......
@@ -5,6 +5,7 @@
 #include <asm/io.h>
 #include <asm/processor.h>
 #include <asm/apic.h>
+#include <asm/cpu.h>
 #ifdef CONFIG_X86_64
 # include <asm/numa_64.h>
@@ -141,6 +142,55 @@ static void __cpuinit init_amd_k6(struct cpuinfo_x86 *c)
 	}
 }
+static void __cpuinit amd_k7_smp_check(struct cpuinfo_x86 *c)
+{
+#ifdef CONFIG_SMP
+	/* called from identify_secondary_cpu()? */
+	if (c->cpu_index == boot_cpu_id)
+		return;
+
+	/*
+	 * Certain Athlons might work (for various values of 'work') in SMP
+	 * but they are not certified as MP capable.
+	 */
+	/* Athlon 660/661 is valid. */
+	if ((c->x86_model == 6) && ((c->x86_mask == 0) ||
+	    (c->x86_mask == 1)))
+		goto valid_k7;
+
+	/* Duron 670 is valid */
+	if ((c->x86_model == 7) && (c->x86_mask == 0))
+		goto valid_k7;
+
+	/*
+	 * Athlon 662, Duron 671, and Athlon >model 7 have the MP capability
+	 * bit.  It's worth noting that the A5 stepping (662) of some
+	 * Athlon XPs has the MP bit set.
+	 * See http://www.heise.de/newsticker/data/jow-18.10.01-000 for
+	 * more.
+	 */
+	if (((c->x86_model == 6) && (c->x86_mask >= 2)) ||
+	    ((c->x86_model == 7) && (c->x86_mask >= 1)) ||
+	    (c->x86_model > 7))
+		if (cpu_has_mp)
+			goto valid_k7;
+
+	/* If we get here, not a certified SMP capable AMD system. */
+
+	/*
+	 * Don't taint if we are running SMP kernel on a single non-MP
+	 * approved Athlon
+	 */
+	WARN_ONCE(1, "WARNING: This combination of AMD"
+		" processors is not suitable for SMP.\n");
+	if (!test_taint(TAINT_UNSAFE_SMP))
+		add_taint(TAINT_UNSAFE_SMP);
+
+valid_k7:
+	;
+#endif
+}
+
 static void __cpuinit init_amd_k7(struct cpuinfo_x86 *c)
 {
 	u32 l, h;
@@ -175,6 +225,8 @@ static void __cpuinit init_amd_k7(struct cpuinfo_x86 *c)
 	}
 	set_cpu_cap(c, X86_FEATURE_K7);
+
+	amd_k7_smp_check(c);
 }
 #endif
@@ -450,7 +502,7 @@ static unsigned int __cpuinit amd_size_cache(struct cpuinfo_x86 *c, unsigned int
 }
 #endif
-static struct cpu_dev amd_cpu_dev __cpuinitdata = {
+static const struct cpu_dev __cpuinitconst amd_cpu_dev = {
 	.c_vendor	= "AMD",
 	.c_ident	= { "AuthenticAMD" },
 #ifdef CONFIG_X86_32
......
+#include <linux/bitops.h>
 #include <linux/kernel.h>
 #include <linux/init.h>
-#include <linux/bitops.h>
 #include <asm/processor.h>
+#include <asm/msr.h>
 #include <asm/e820.h>
 #include <asm/mtrr.h>
-#include <asm/msr.h>
 #include "cpu.h"
@@ -276,7 +276,7 @@ static void __cpuinit init_c3(struct cpuinfo_x86 *c)
 		 */
 		c->x86_capability[5] = cpuid_edx(0xC0000001);
 	}
-
+#ifdef CONFIG_X86_32
 	/* Cyrix III family needs CX8 & PGE explicitly enabled. */
 	if (c->x86_model >= 6 && c->x86_model <= 9) {
 		rdmsr(MSR_VIA_FCR, lo, hi);
@@ -288,6 +288,11 @@ static void __cpuinit init_c3(struct cpuinfo_x86 *c)
 	/* Before Nehemiah, the C3's had 3dNOW! */
 	if (c->x86_model >= 6 && c->x86_model < 9)
 		set_cpu_cap(c, X86_FEATURE_3DNOW);
+#endif
+	if (c->x86 == 0x6 && c->x86_model >= 0xf) {
+		c->x86_cache_alignment = c->x86_clflush_size * 2;
+		set_cpu_cap(c, X86_FEATURE_REP_GOOD);
+	}
 	display_cacheinfo(c);
 }
@@ -316,16 +321,25 @@ enum {
 static void __cpuinit early_init_centaur(struct cpuinfo_x86 *c)
 {
 	switch (c->x86) {
+#ifdef CONFIG_X86_32
 	case 5:
 		/* Emulate MTRRs using Centaur's MCR. */
 		set_cpu_cap(c, X86_FEATURE_CENTAUR_MCR);
 		break;
+#endif
+	case 6:
+		if (c->x86_model >= 0xf)
+			set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+		break;
 	}
+#ifdef CONFIG_X86_64
+	set_cpu_cap(c, X86_FEATURE_SYSENTER32);
+#endif
 }
 static void __cpuinit init_centaur(struct cpuinfo_x86 *c)
 {
+#ifdef CONFIG_X86_32
 	char *name;
 	u32  fcr_set = 0;
 	u32  fcr_clr = 0;
@@ -337,8 +351,10 @@ static void __cpuinit init_centaur(struct cpuinfo_x86 *c)
 	 * 3DNow is IDd by bit 31 in extended CPUID (1*32+31) anyway
 	 */
 	clear_cpu_cap(c, 0*32+31);
+#endif
+	early_init_centaur(c);
 	switch (c->x86) {
+#ifdef CONFIG_X86_32
 	case 5:
 		switch (c->x86_model) {
 		case 4:
@@ -442,16 +458,20 @@ static void __cpuinit init_centaur(struct cpuinfo_x86 *c)
 		}
 		sprintf(c->x86_model_id, "WinChip %s", name);
 		break;
+#endif
 	case 6:
 		init_c3(c);
 		break;
 	}
+#ifdef CONFIG_X86_64
+	set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
+#endif
 }
 static unsigned int __cpuinit
 centaur_size_cache(struct cpuinfo_x86 *c, unsigned int size)
 {
+#ifdef CONFIG_X86_32
 	/* VIA C3 CPUs (670-68F) need further shifting. */
 	if ((c->x86 == 6) && ((c->x86_model == 7) || (c->x86_model == 8)))
 		size >>= 8;
@@ -464,11 +484,11 @@ centaur_size_cache(struct cpuinfo_x86 *c, unsigned int size)
 	if ((c->x86 == 6) && (c->x86_model == 9) &&
 	    (c->x86_mask == 1) && (size == 65))
 		size -= 1;
+#endif
 	return size;
 }
-static struct cpu_dev centaur_cpu_dev __cpuinitdata = {
+static const struct cpu_dev __cpuinitconst centaur_cpu_dev = {
 	.c_vendor	= "Centaur",
 	.c_ident	= { "CentaurHauls" },
 	.c_early_init	= early_init_centaur,
......
-#include <linux/init.h>
-#include <linux/smp.h>
-#include <asm/cpufeature.h>
-#include <asm/processor.h>
-#include "cpu.h"
-
-static void __cpuinit early_init_centaur(struct cpuinfo_x86 *c)
-{
-	if (c->x86 == 0x6 && c->x86_model >= 0xf)
-		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
-
-	set_cpu_cap(c, X86_FEATURE_SYSENTER32);
-}
-
-static void __cpuinit init_centaur(struct cpuinfo_x86 *c)
-{
-	early_init_centaur(c);
-
-	if (c->x86 == 0x6 && c->x86_model >= 0xf) {
-		c->x86_cache_alignment = c->x86_clflush_size * 2;
-		set_cpu_cap(c, X86_FEATURE_REP_GOOD);
-	}
-
-	set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
-}
-
-static struct cpu_dev centaur_cpu_dev __cpuinitdata = {
-	.c_vendor	= "Centaur",
-	.c_ident	= { "CentaurHauls" },
-	.c_early_init	= early_init_centaur,
-	.c_init		= init_centaur,
-	.c_x86_vendor	= X86_VENDOR_CENTAUR,
-};
-
-cpu_dev_register(centaur_cpu_dev);
@@ -5,31 +5,32 @@
 struct cpu_model_info {
 	int		vendor;
 	int		family;
-	char		*model_names[16];
+	const char	*model_names[16];
 };
 /* attempt to consolidate cpu attributes */
 struct cpu_dev {
-	char	* c_vendor;
+	const char	*c_vendor;
 	/* some have two possibilities for cpuid string */
-	char	* c_ident[2];
+	const char	*c_ident[2];
 	struct		cpu_model_info c_models[4];
-	void		(*c_early_init)(struct cpuinfo_x86 *c);
-	void		(*c_init)(struct cpuinfo_x86 * c);
-	void		(*c_identify)(struct cpuinfo_x86 * c);
-	unsigned int	(*c_size_cache)(struct cpuinfo_x86 * c, unsigned int size);
+	void		(*c_early_init)(struct cpuinfo_x86 *);
+	void		(*c_init)(struct cpuinfo_x86 *);
+	void		(*c_identify)(struct cpuinfo_x86 *);
+	unsigned int	(*c_size_cache)(struct cpuinfo_x86 *, unsigned int);
 	int		c_x86_vendor;
 };
 #define cpu_dev_register(cpu_devX) \
-	static struct cpu_dev *__cpu_dev_##cpu_devX __used \
+	static const struct cpu_dev *const __cpu_dev_##cpu_devX __used \
 	__attribute__((__section__(".x86_cpu_dev.init"))) = \
 	&cpu_devX;
-extern struct cpu_dev *__x86_cpu_dev_start[], *__x86_cpu_dev_end[];
+extern const struct cpu_dev *const __x86_cpu_dev_start[],
+			    *const __x86_cpu_dev_end[];
 extern void display_cacheinfo(struct cpuinfo_x86 *c);
......
@@ -61,23 +61,23 @@ static void __cpuinit do_cyrix_devid(unsigned char *dir0, unsigned char *dir1)
 */
 static unsigned char Cx86_dir0_msb __cpuinitdata = 0;
-static char Cx86_model[][9] __cpuinitdata = {
+static const char __cpuinitconst Cx86_model[][9] = {
 	"Cx486", "Cx486", "5x86 ", "6x86", "MediaGX ", "6x86MX ",
 	"M II ", "Unknown"
 };
-static char Cx486_name[][5] __cpuinitdata = {
+static const char __cpuinitconst Cx486_name[][5] = {
 	"SLC", "DLC", "SLC2", "DLC2", "SRx", "DRx",
 	"SRx2", "DRx2"
 };
-static char Cx486S_name[][4] __cpuinitdata = {
+static const char __cpuinitconst Cx486S_name[][4] = {
 	"S", "S2", "Se", "S2e"
 };
-static char Cx486D_name[][4] __cpuinitdata = {
+static const char __cpuinitconst Cx486D_name[][4] = {
 	"DX", "DX2", "?", "?", "?", "DX4"
 };
 static char Cx86_cb[] __cpuinitdata = "?.5x Core/Bus Clock";
-static char cyrix_model_mult1[] __cpuinitdata = "12??43";
-static char cyrix_model_mult2[] __cpuinitdata = "12233445";
+static const char __cpuinitconst cyrix_model_mult1[] = "12??43";
+static const char __cpuinitconst cyrix_model_mult2[] = "12233445";
 /*
 * Reset the slow-loop (SLOP) bit on the 686(L) which is set by some old
@@ -435,7 +435,7 @@ static void __cpuinit cyrix_identify(struct cpuinfo_x86 *c)
 	}
 }
-static struct cpu_dev cyrix_cpu_dev __cpuinitdata = {
+static const struct cpu_dev __cpuinitconst cyrix_cpu_dev = {
 	.c_vendor	= "Cyrix",
 	.c_ident	= { "CyrixInstead" },
 	.c_early_init	= early_init_cyrix,
@@ -446,7 +446,7 @@ static struct cpu_dev cyrix_cpu_dev __cpuinitdata = {
 cpu_dev_register(cyrix_cpu_dev);
-static struct cpu_dev nsc_cpu_dev __cpuinitdata = {
+static const struct cpu_dev __cpuinitconst nsc_cpu_dev = {
 	.c_vendor	= "NSC",
 	.c_ident	= { "Geode by NSC" },
 	.c_init		= init_nsc,
......
@@ -14,6 +14,7 @@
 #include <asm/uaccess.h>
 #include <asm/ds.h>
 #include <asm/bugs.h>
+#include <asm/cpu.h>
 #ifdef CONFIG_X86_64
 #include <asm/topology.h>
@@ -54,6 +55,11 @@ static void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
 		c->x86_cache_alignment = 128;
 #endif
+	/* CPUID workaround for 0F33/0F34 CPU */
+	if (c->x86 == 0xF && c->x86_model == 0x3
+	    && (c->x86_mask == 0x3 || c->x86_mask == 0x4))
+		c->x86_phys_bits = 36;
+
 	/*
 	 * c->x86_power is 8000_0007 edx. Bit 8 is TSC runs at constant rate
 	 * with P/T states and does not stop in deep C-states.
@@ -116,6 +122,28 @@ static void __cpuinit trap_init_f00f_bug(void)
 }
 #endif
+static void __cpuinit intel_smp_check(struct cpuinfo_x86 *c)
+{
+#ifdef CONFIG_SMP
+	/* called from identify_secondary_cpu()? */
+	if (c->cpu_index == boot_cpu_id)
+		return;
+
+	/*
+	 * Mask B, Pentium, but not Pentium MMX
+	 */
+	if (c->x86 == 5 &&
+	    c->x86_mask >= 1 && c->x86_mask <= 4 &&
+	    c->x86_model <= 3) {
+		/*
+		 * Remember we have B step Pentia with bugs
+		 */
+		WARN_ONCE(1, "WARNING: SMP operation may be unreliable"
+			" with B stepping processors.\n");
+	}
+#endif
+}
+
 static void __cpuinit intel_workarounds(struct cpuinfo_x86 *c)
 {
 	unsigned long lo, hi;
@@ -192,6 +220,8 @@ static void __cpuinit intel_workarounds(struct cpuinfo_x86 *c)
 #ifdef CONFIG_X86_NUMAQ
 	numaq_tsc_disable();
 #endif
+
+	intel_smp_check(c);
 }
 #else
 static void __cpuinit intel_workarounds(struct cpuinfo_x86 *c)
@@ -391,7 +421,7 @@ static unsigned int __cpuinit intel_size_cache(struct cpuinfo_x86 *c, unsigned i
 }
 #endif
-static struct cpu_dev intel_cpu_dev __cpuinitdata = {
+static const struct cpu_dev __cpuinitconst intel_cpu_dev = {
 	.c_vendor	= "Intel",
 	.c_ident	= { "GenuineIntel" },
 #ifdef CONFIG_X86_32
......
@@ -32,7 +32,7 @@ struct _cache_table
 };
 /* all the cache descriptor types we care about (no TLB or trace cache entries) */
-static struct _cache_table cache_table[] __cpuinitdata =
+static const struct _cache_table __cpuinitconst cache_table[] =
 {
 	{ 0x06, LVL_1_INST, 8 },	/* 4-way set assoc, 32 byte line size */
 	{ 0x08, LVL_1_INST, 16 },	/* 4-way set assoc, 32 byte line size */
@@ -206,15 +206,15 @@ union l3_cache {
 	unsigned val;
 };
-static unsigned short assocs[] __cpuinitdata = {
+static const unsigned short __cpuinitconst assocs[] = {
 	[1] = 1, [2] = 2, [4] = 4, [6] = 8,
 	[8] = 16, [0xa] = 32, [0xb] = 48,
 	[0xc] = 64,
 	[0xf] = 0xffff // ??
 };
-static unsigned char levels[] __cpuinitdata = { 1, 1, 2, 3 };
-static unsigned char types[] __cpuinitdata = { 1, 2, 3, 3 };
+static const unsigned char __cpuinitconst levels[] = { 1, 1, 2, 3 };
+static const unsigned char __cpuinitconst types[] = { 1, 2, 3, 3 };
 static void __cpuinit
 amd_cpuid4(int leaf, union _cpuid4_leaf_eax *eax,
......
@@ -4,3 +4,4 @@ obj-$(CONFIG_X86_32)	+= k7.o p4.o p5.o p6.o winchip.o
 obj-$(CONFIG_X86_MCE_INTEL)	+= mce_intel_64.o
 obj-$(CONFIG_X86_MCE_AMD)	+= mce_amd_64.o
 obj-$(CONFIG_X86_MCE_NONFATAL)	+= non-fatal.o
+obj-$(CONFIG_X86_MCE_THRESHOLD) += threshold.o
@@ -60,20 +60,6 @@ void mcheck_init(struct cpuinfo_x86 *c)
 	}
 }
-static unsigned long old_cr4 __initdata;
-
-void __init stop_mce(void)
-{
-	old_cr4 = read_cr4();
-	clear_in_cr4(X86_CR4_MCE);
-}
-
-void __init restart_mce(void)
-{
-	if (old_cr4 & X86_CR4_MCE)
-		set_in_cr4(X86_CR4_MCE);
-}
-
 static int __init mcheck_disable(char *str)
 {
 	mce_disabled = 1;
......
@@ -79,6 +79,8 @@ static unsigned char shared_bank[NR_BANKS] = {
 static DEFINE_PER_CPU(unsigned char, bank_map);	/* see which banks are on */
+static void amd_threshold_interrupt(void);
+
 /*
 * CPU Initialization
 */
@@ -90,7 +92,8 @@ struct thresh_restart {
 };
 /* must be called with correct cpu affinity */
-static long threshold_restart_bank(void *_tr)
+/* Called via smp_call_function_single() */
+static void threshold_restart_bank(void *_tr)
 {
 	struct thresh_restart *tr = _tr;
 	u32 mci_misc_hi, mci_misc_lo;
@@ -117,7 +120,6 @@ static long threshold_restart_bank(void *_tr)
 	mci_misc_hi |= MASK_COUNT_EN_HI;
 	wrmsr(tr->b->address, mci_misc_lo, mci_misc_hi);
-	return 0;
 }
 /* cpu init entry point, called from mce.c with preempt off */
@@ -174,6 +176,8 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
 			tr.reset = 0;
 			tr.old_limit = 0;
 			threshold_restart_bank(&tr);
+
+			mce_threshold_vector = amd_threshold_interrupt;
 		}
 	}
 }
@@ -187,19 +191,13 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
 * the interrupt goes off when error_count reaches threshold_limit.
 * the handler will simply log mcelog w/ software defined bank number.
 */
-asmlinkage void mce_threshold_interrupt(void)
+static void amd_threshold_interrupt(void)
 {
 	unsigned int bank, block;
 	struct mce m;
 	u32 low = 0, high = 0, address = 0;
-	ack_APIC_irq();
-	exit_idle();
-	irq_enter();
-
-	memset(&m, 0, sizeof(m));
-	rdtscll(m.tsc);
-	m.cpu = smp_processor_id();
+	mce_setup(&m);
 	/* assume first bank caused it */
 	for (bank = 0; bank < NR_BANKS; ++bank) {
@@ -233,7 +231,8 @@ asmlinkage void mce_threshold_interrupt(void)
 			/* Log the machine check that caused the threshold
 			   event. */
-			do_machine_check(NULL, 0);
+			machine_check_poll(MCP_TIMESTAMP,
+					   &__get_cpu_var(mce_poll_banks));
 			if (high & MASK_OVERFLOW_HI) {
 				rdmsrl(address, m.misc);
@@ -243,13 +242,10 @@
 					+ bank * NR_BLOCKS
 					+ block;
 				mce_log(&m);
-				goto out;
+				return;
 			}
 		}
 	}
-out:
-	inc_irq_stat(irq_threshold_count);
-	irq_exit();
 }
 /*
@@ -283,7 +279,7 @@ static ssize_t store_interrupt_enable(struct threshold_block *b,
 	tr.b = b;
 	tr.reset = 0;
 	tr.old_limit = 0;
-	work_on_cpu(b->cpu, threshold_restart_bank, &tr);
+	smp_call_function_single(b->cpu, threshold_restart_bank, &tr, 1);
 	return end - buf;
 }
@@ -305,23 +301,32 @@ static ssize_t store_threshold_limit(struct threshold_block *b,
 	tr.b = b;
 	tr.reset = 0;
-	work_on_cpu(b->cpu, threshold_restart_bank, &tr);
+	smp_call_function_single(b->cpu, threshold_restart_bank, &tr, 1);
 	return end - buf;
 }
-static long local_error_count(void *_b)
+struct threshold_block_cross_cpu {
+	struct threshold_block	*tb;
+	long			retval;
+};
+
+static void local_error_count_handler(void *_tbcc)
 {
-	struct threshold_block *b = _b;
+	struct threshold_block_cross_cpu *tbcc = _tbcc;
+	struct threshold_block *b = tbcc->tb;
 	u32 low, high;
 	rdmsr(b->address, low, high);
-	return (high & 0xFFF) - (THRESHOLD_MAX - b->threshold_limit);
+	tbcc->retval = (high & 0xFFF) - (THRESHOLD_MAX - b->threshold_limit);
 }
 static ssize_t show_error_count(struct threshold_block *b, char *buf)
 {
-	return sprintf(buf, "%lx\n", work_on_cpu(b->cpu, local_error_count, b));
+	struct threshold_block_cross_cpu tbcc = { .tb = b, };
+
+	smp_call_function_single(b->cpu, local_error_count_handler, &tbcc, 1);
+	return sprintf(buf, "%lx\n", tbcc.retval);
 }
 static ssize_t store_error_count(struct threshold_block *b,
@@ -329,7 +334,7 @@ static ssize_t store_error_count(struct threshold_block *b,
 {
 	struct thresh_restart tr = { .b = b, .reset = 1, .old_limit = 0 };
-	work_on_cpu(b->cpu, threshold_restart_bank, &tr);
+	smp_call_function_single(b->cpu, threshold_restart_bank, &tr, 1);
 	return 1;
 }
@@ -398,7 +403,7 @@ static __cpuinit int allocate_threshold_blocks(unsigned int cpu,
 	if ((bank >= NR_BANKS) || (block >= NR_BLOCKS))
 		return 0;
-	if (rdmsr_safe(address, &low, &high))
+	if (rdmsr_safe_on_cpu(cpu, address, &low, &high))
 		return 0;
 	if (!(high & MASK_VALID_HI)) {
@@ -462,12 +467,11 @@ static __cpuinit int allocate_threshold_blocks(unsigned int cpu,
 	return err;
 }
-static __cpuinit long local_allocate_threshold_blocks(void *_bank)
+static __cpuinit long
+local_allocate_threshold_blocks(int cpu, unsigned int bank)
 {
-	unsigned int *bank = _bank;
-
-	return allocate_threshold_blocks(smp_processor_id(), *bank, 0,
-					 MSR_IA32_MC0_MISC + *bank * 4);
+	return allocate_threshold_blocks(cpu, bank, 0,
+					 MSR_IA32_MC0_MISC + bank * 4);
 }
 /* symlinks sibling shared banks to first core.  first core owns dir/files. */
@@ -530,7 +534,7 @@ static __cpuinit int threshold_create_bank(unsigned int cpu, unsigned int bank)
 	per_cpu(threshold_banks, cpu)[bank] = b;
-	err = work_on_cpu(cpu, local_allocate_threshold_blocks, &bank);
+	err = local_allocate_threshold_blocks(cpu, bank);
 	if (err)
 		goto out_free;
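The recurring conversion in this file replaces work_on_cpu() — which bounces through a workqueue and may sleep — with smp_call_function_single(), which runs a void handler directly on the target CPU; return values now travel through the argument struct, as threshold_block_cross_cpu above shows. The pattern in isolation, with hypothetical names:

	struct remote_arg {
		int	in;
		long	out;
	};

	static void remote_handler(void *info)
	{
		struct remote_arg *r = info;

		r->out = r->in * 2;	/* runs on the chosen CPU */
	}

	static long run_on_cpu(int cpu, int val)
	{
		struct remote_arg r = { .in = val };

		/* wait=1: don't return until the handler has run on 'cpu' */
		smp_call_function_single(cpu, remote_handler, &r, 1);
		return r.out;
	}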
......
 /*
 * Intel specific MCE features.
 * Copyright 2004 Zwane Mwaikambo <zwane@linuxpower.ca>
+ * Copyright (C) 2008, 2009 Intel Corporation
+ * Author: Andi Kleen
 */
 #include <linux/init.h>
@@ -13,6 +15,7 @@
 #include <asm/hw_irq.h>
 #include <asm/idle.h>
 #include <asm/therm_throt.h>
+#include <asm/apic.h>
 asmlinkage void smp_thermal_interrupt(void)
 {
@@ -25,7 +28,7 @@ asmlinkage void smp_thermal_interrupt(void)
 	rdmsrl(MSR_IA32_THERM_STATUS, msr_val);
 	if (therm_throt_process(msr_val & 1))
-		mce_log_therm_throt_event(smp_processor_id(), msr_val);
+		mce_log_therm_throt_event(msr_val);
 	inc_irq_stat(irq_thermal_count);
 	irq_exit();
@@ -85,7 +88,209 @@ static void intel_init_thermal(struct cpuinfo_x86 *c)
 	return;
 }
+/*
+ * Support for Intel Corrected Machine Check Interrupts. This allows
+ * the CPU to raise an interrupt when a corrected machine check happened.
+ * Normally we pick those up using a regular polling timer.
+ * Also supports reliable discovery of shared banks.
+ */
+
+static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned);
+
+/*
+ * cmci_discover_lock protects against parallel discovery attempts
+ * which could race against each other.
+ */
+static DEFINE_SPINLOCK(cmci_discover_lock);
+
+#define CMCI_THRESHOLD 1
+
+static int cmci_supported(int *banks)
+{
+	u64 cap;
+
+	/*
+	 * Vendor check is not strictly needed, but the MCE
+	 * initialization is vendor keyed and this
+	 * makes sure none of the backdoors are entered otherwise.
+	 */
+	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+		return 0;
+	if (!cpu_has_apic || lapic_get_maxlvt() < 6)
+		return 0;
+	rdmsrl(MSR_IA32_MCG_CAP, cap);
+	*banks = min_t(unsigned, MAX_NR_BANKS, cap & 0xff);
+	return !!(cap & MCG_CMCI_P);
+}
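A worked decode of that capability read, using a hypothetical register value: if MSR_IA32_MCG_CAP returned 0x0c09, the low byte (cap & 0xff = 9) reports nine MCE banks, and bit 10 (MCG_CMCI_P) being set means the CPU can signal corrected errors via CMCI — so cmci_supported() would store 9 in *banks and return 1.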
+/*
+ * The interrupt handler. This is called on every event.
+ * Just call the poller directly to log any events.
+ * This could in theory increase the threshold under high load,
+ * but doesn't for now.
+ */
+static void intel_threshold_interrupt(void)
+{
+	machine_check_poll(MCP_TIMESTAMP, &__get_cpu_var(mce_banks_owned));
+	mce_notify_user();
+}
+
+static void print_update(char *type, int *hdr, int num)
+{
+	if (*hdr == 0)
+		printk(KERN_INFO "CPU %d MCA banks", smp_processor_id());
+	*hdr = 1;
+	printk(KERN_CONT " %s:%d", type, num);
+}
+
+/*
+ * Enable CMCI (Corrected Machine Check Interrupt) for available MCE banks
+ * on this CPU. Use the algorithm recommended in the SDM to discover shared
+ * banks.
+ */
+static void cmci_discover(int banks, int boot)
+{
+	unsigned long *owned = (void *)&__get_cpu_var(mce_banks_owned);
+	int hdr = 0;
+	int i;
+
+	spin_lock(&cmci_discover_lock);
+	for (i = 0; i < banks; i++) {
+		u64 val;
+
+		if (test_bit(i, owned))
+			continue;
+
+		rdmsrl(MSR_IA32_MC0_CTL2 + i, val);
+
+		/* Already owned by someone else? */
+		if (val & CMCI_EN) {
+			if (test_and_clear_bit(i, owned) || boot)
+				print_update("SHD", &hdr, i);
+			__clear_bit(i, __get_cpu_var(mce_poll_banks));
+			continue;
+		}
+
+		val |= CMCI_EN | CMCI_THRESHOLD;
+		wrmsrl(MSR_IA32_MC0_CTL2 + i, val);
+		rdmsrl(MSR_IA32_MC0_CTL2 + i, val);
+
+		/* Did the enable bit stick? -- the bank supports CMCI */
+		if (val & CMCI_EN) {
+			if (!test_and_set_bit(i, owned) || boot)
+				print_update("CMCI", &hdr, i);
+			__clear_bit(i, __get_cpu_var(mce_poll_banks));
+		} else {
+			WARN_ON(!test_bit(i, __get_cpu_var(mce_poll_banks)));
+		}
+	}
+	spin_unlock(&cmci_discover_lock);
+	if (hdr)
+		printk(KERN_CONT "\n");
+}
+
+/*
+ * Just in case we missed an event during initialization, check
+ * all the CMCI owned banks.
+ */
+void cmci_recheck(void)
+{
+	unsigned long flags;
+	int banks;
+
+	if (!mce_available(&current_cpu_data) || !cmci_supported(&banks))
+		return;
+	local_irq_save(flags);
+	machine_check_poll(MCP_TIMESTAMP, &__get_cpu_var(mce_banks_owned));
+	local_irq_restore(flags);
+}
+/*
+ * Disable CMCI on this CPU for all banks it owns when it goes down.
+ * This allows other CPUs to claim the banks on rediscovery.
+ */
+void cmci_clear(void)
+{
+	int i;
+	int banks;
+	u64 val;
+
+	if (!cmci_supported(&banks))
+		return;
+	spin_lock(&cmci_discover_lock);
+	for (i = 0; i < banks; i++) {
+		if (!test_bit(i, __get_cpu_var(mce_banks_owned)))
+			continue;
+		/* Disable CMCI */
+		rdmsrl(MSR_IA32_MC0_CTL2 + i, val);
+		val &= ~(CMCI_EN|CMCI_THRESHOLD_MASK);
+		wrmsrl(MSR_IA32_MC0_CTL2 + i, val);
+		__clear_bit(i, __get_cpu_var(mce_banks_owned));
+	}
+	spin_unlock(&cmci_discover_lock);
+}
+
+/*
+ * After a CPU went down, cycle through all the others and rediscover.
+ * Must run in process context.
+ */
+void cmci_rediscover(int dying)
+{
+	int banks;
+	int cpu;
+	cpumask_var_t old;
+
+	if (!cmci_supported(&banks))
+		return;
+	if (!alloc_cpumask_var(&old, GFP_KERNEL))
+		return;
+	cpumask_copy(old, &current->cpus_allowed);
+
+	for_each_online_cpu (cpu) {
+		if (cpu == dying)
+			continue;
+		if (set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu)))
+			continue;
+		/* Recheck banks in case CPUs don't all have the same */
+		if (cmci_supported(&banks))
+			cmci_discover(banks, 0);
+	}
+
+	set_cpus_allowed_ptr(current, old);
+	free_cpumask_var(old);
+}
+
+/*
+ * Reenable CMCI on this CPU in case a CPU down failed.
+ */
+void cmci_reenable(void)
+{
+	int banks;
+	if (cmci_supported(&banks))
+		cmci_discover(banks, 0);
+}
+
+static void intel_init_cmci(void)
+{
+	int banks;
+
+	if (!cmci_supported(&banks))
+		return;
+	mce_threshold_vector = intel_threshold_interrupt;
+	cmci_discover(banks, 1);
+	/*
+	 * For CPU #0 this runs with the APIC still disabled, but that's
+	 * ok because only the vector is set up. We still do another
+	 * check for the banks later for CPU #0 just to make sure
+	 * to not miss any events.
+	 */
+	apic_write(APIC_LVTCMCI, THRESHOLD_APIC_VECTOR|APIC_DM_FIXED);
+	cmci_recheck();
+}
+
 void mce_intel_feature_init(struct cpuinfo_x86 *c)
 {
 	intel_init_thermal(c);
+	intel_init_cmci();
 }
+/*
+ * Common corrected MCE threshold handler code:
+ */
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+
+#include <asm/irq_vectors.h>
+#include <asm/apic.h>
+#include <asm/idle.h>
+#include <asm/mce.h>
+
+static void default_threshold_interrupt(void)
+{
+	printk(KERN_ERR "Unexpected threshold interrupt at vector %x\n",
+			 THRESHOLD_APIC_VECTOR);
+}
+
+void (*mce_threshold_vector)(void) = default_threshold_interrupt;
+
+asmlinkage void mce_threshold_interrupt(void)
+{
+	exit_idle();
+	irq_enter();
+	inc_irq_stat(irq_threshold_count);
+	mce_threshold_vector();
+	irq_exit();
+	/* Ack only at the end to avoid potential reentry */
+	ack_APIC_irq();
+}
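To tie the threshold pieces together: the stub above owns the irq entry/exit and the APIC ack, and vendor init code merely swaps in its handler — exactly what the mce_amd_64.c and mce_intel_64.c hunks earlier do with amd_threshold_interrupt and intel_threshold_interrupt. A sketch of the hook-up, with hypothetical names:

	static void my_threshold_handler(void)
	{
		/* Poll/log the banks this CPU owns; no irq bookkeeping needed --
		 * mce_threshold_interrupt() has already done irq_enter() etc. */
	}

	static void __init my_feature_init(void)
	{
		/* All threshold interrupts on this CPU now route here. */
		mce_threshold_vector = my_threshold_handler;
	}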
-obj-y		:= main.o if.o generic.o state.o
+obj-y		:= main.o if.o generic.o state.o cleanup.o
 obj-$(CONFIG_X86_32) += amd.o cyrix.o centaur.o
@@ -79,6 +79,7 @@ extern struct mtrr_ops * mtrr_if;
 extern unsigned int num_var_ranges;
 extern u64 mtrr_tom2;
+extern struct mtrr_state_type mtrr_state;
 void mtrr_state_warn(void);
 const char *mtrr_attrib_to_str(int x);
@@ -88,3 +89,6 @@ void mtrr_wrmsr(unsigned, unsigned, unsigned);
 int amd_init_mtrr(void);
 int cyrix_init_mtrr(void);
 int centaur_init_mtrr(void);
+
+extern int changed_by_mtrr_cleanup;
+extern int mtrr_cleanup(unsigned address_bits);
@@ -98,7 +98,7 @@ static void __cpuinit init_transmeta(struct cpuinfo_x86 *c)
 #endif
 }
-static struct cpu_dev transmeta_cpu_dev __cpuinitdata = {
+static const struct cpu_dev __cpuinitconst transmeta_cpu_dev = {
 	.c_vendor	= "Transmeta",
 	.c_ident	= { "GenuineTMx86", "TransmetaCPU" },
 	.c_early_init	= early_init_transmeta,
......
@@ -8,7 +8,7 @@
 * so no special init takes place.
 */
-static struct cpu_dev umc_cpu_dev __cpuinitdata = {
+static const struct cpu_dev __cpuinitconst umc_cpu_dev = {
 	.c_vendor	= "UMC",
 	.c_ident	= { "UMC UMC UMC" },
 	.c_models = {
......
@@ -250,7 +250,7 @@ static int dbgp_wait_until_complete(void)
 	return (ctrl & DBGP_ERROR) ? -DBGP_ERRCODE(ctrl) : DBGP_LEN(ctrl);
 }
-static void dbgp_mdelay(int ms)
+static void __init dbgp_mdelay(int ms)
 {
 	int i;
@@ -311,7 +311,7 @@ static void dbgp_set_data(const void *buf, int size)
 	writel(hi, &ehci_debug->data47);
 }
-static void dbgp_get_data(void *buf, int size)
+static void __init dbgp_get_data(void *buf, int size)
 {
 	unsigned char *bytes = buf;
 	u32 lo, hi;
@@ -355,7 +355,7 @@ static int dbgp_bulk_write(unsigned devnum, unsigned endpoint,
 	return ret;
 }
-static int dbgp_bulk_read(unsigned devnum, unsigned endpoint, void *data,
+static int __init dbgp_bulk_read(unsigned devnum, unsigned endpoint, void *data,
 			  int size)
 {
 	u32 pids, addr, ctrl;
@@ -386,8 +386,8 @@ static int dbgp_bulk_read(unsigned devnum, unsigned endpoint, void *data,
 	return ret;
 }
-static int dbgp_control_msg(unsigned devnum, int requesttype, int request,
-			    int value, int index, void *data, int size)
+static int __init dbgp_control_msg(unsigned devnum, int requesttype,
+	int request, int value, int index, void *data, int size)
 {
 	u32 pids, addr, ctrl;
 	struct usb_ctrlrequest req;
@@ -489,7 +489,7 @@ static u32 __init find_dbgp(int ehci_num, u32 *rbus, u32 *rslot, u32 *rfunc)
 	return 0;
 }
-static int ehci_reset_port(int port)
+static int __init ehci_reset_port(int port)
 {
 	u32 portsc;
 	u32 delay_time, delay;
@@ -532,7 +532,7 @@ static int ehci_reset_port(int port)
 	return -EBUSY;
 }
-static int ehci_wait_for_port(int port)
+static int __init ehci_wait_for_port(int port)
 {
 	u32 status;
 	int ret, reps;
@@ -557,13 +557,13 @@ static inline void dbgp_printk(const char *fmt, ...) { }
 typedef void (*set_debug_port_t)(int port);
-static void default_set_debug_port(int port)
+static void __init default_set_debug_port(int port)
 {
 }
-static set_debug_port_t set_debug_port = default_set_debug_port;
+static set_debug_port_t __initdata set_debug_port = default_set_debug_port;
-static void nvidia_set_debug_port(int port)
+static void __init nvidia_set_debug_port(int port)
 {
 	u32 dword;
 	dword = read_pci_config(ehci_dev.bus, ehci_dev.slot, ehci_dev.func,
......
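All of these annotations move one-shot setup code into the init sections, which the kernel discards once boot finishes ("Freeing unused kernel memory"). Note that only the probe/handshake helpers get tagged; the steady-state output path must stay resident, since "earlyprintk=dbgp,keep" can keep the console alive past early bootup. A minimal sketch of the pattern (names are illustrative):

	#include <linux/init.h>

	static int __initdata probe_result;	/* lives in .init.data */

	static int __init my_early_probe(void)	/* lives in .init.text */
	{
		probe_result = 1;	/* must never be referenced after boot */
		return 0;
	}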
@@ -442,8 +442,7 @@ sysenter_past_esp:
 	GET_THREAD_INFO(%ebp)
-	/* Note, _TIF_SECCOMP is bit number 8, and so it needs testw and not testb */
-	testw $_TIF_WORK_SYSCALL_ENTRY,TI_flags(%ebp)
+	testl $_TIF_WORK_SYSCALL_ENTRY,TI_flags(%ebp)
 	jnz sysenter_audit
 sysenter_do_call:
 	cmpl $(nr_syscalls), %eax
@@ -454,7 +453,7 @@ sysenter_do_call:
 	DISABLE_INTERRUPTS(CLBR_ANY)
 	TRACE_IRQS_OFF
 	movl TI_flags(%ebp), %ecx
-	testw $_TIF_ALLWORK_MASK, %cx
+	testl $_TIF_ALLWORK_MASK, %ecx
 	jne sysexit_audit
 sysenter_exit:
 /* if something modifies registers it must also disable sysexit */
@@ -468,7 +467,7 @@ sysenter_exit:
 #ifdef CONFIG_AUDITSYSCALL
 sysenter_audit:
-	testw $(_TIF_WORK_SYSCALL_ENTRY & ~_TIF_SYSCALL_AUDIT),TI_flags(%ebp)
+	testl $(_TIF_WORK_SYSCALL_ENTRY & ~_TIF_SYSCALL_AUDIT),TI_flags(%ebp)
 	jnz syscall_trace_entry
 	addl $4,%esp
 	CFI_ADJUST_CFA_OFFSET -4
@@ -485,7 +484,7 @@ sysenter_audit:
 	jmp sysenter_do_call
 sysexit_audit:
-	testw $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT), %cx
+	testl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT), %ecx
 	jne syscall_exit_work
 	TRACE_IRQS_ON
 	ENABLE_INTERRUPTS(CLBR_ANY)
@@ -498,7 +497,7 @@ sysexit_audit:
 	DISABLE_INTERRUPTS(CLBR_ANY)
 	TRACE_IRQS_OFF
 	movl TI_flags(%ebp), %ecx
-	testw $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT), %cx
+	testl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT), %ecx
 	jne syscall_exit_work
 	movl PT_EAX(%esp),%eax		/* reload syscall return value */
 	jmp sysenter_exit
@@ -523,8 +522,7 @@ ENTRY(system_call)
 	SAVE_ALL
 	GET_THREAD_INFO(%ebp)
 					# system call tracing in operation / emulation
-	/* Note, _TIF_SECCOMP is bit number 8, and so it needs testw and not testb */
-	testw $_TIF_WORK_SYSCALL_ENTRY,TI_flags(%ebp)
+	testl $_TIF_WORK_SYSCALL_ENTRY,TI_flags(%ebp)
 	jnz syscall_trace_entry
 	cmpl $(nr_syscalls), %eax
 	jae syscall_badsys
@@ -538,7 +536,7 @@ syscall_exit:
 	# between sampling and the iret
 	TRACE_IRQS_OFF
 	movl TI_flags(%ebp), %ecx
-	testw $_TIF_ALLWORK_MASK, %cx	# current->work
+	testl $_TIF_ALLWORK_MASK, %ecx	# current->work
 	jne syscall_exit_work
 restore_all:
@@ -673,7 +671,7 @@ END(syscall_trace_entry)
 	# perform syscall exit tracing
 	ALIGN
 syscall_exit_work:
-	testb $_TIF_WORK_SYSCALL_EXIT, %cl
+	testl $_TIF_WORK_SYSCALL_EXIT, %ecx
 	jz work_pending
 	TRACE_IRQS_ON
 	ENABLE_INTERRUPTS(CLBR_ANY)	# could let syscall_trace_leave() call
...
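The testw/%cx (and testb/%cl) to testl/%ecx switch is not cosmetic: a 16-bit test only examines the low 16 bits of TI_flags, so any _TIF_ work bit allocated at position 16 or above would be silently ignored on these syscall paths. A C illustration of the difference (the bit position is illustrative):

	unsigned int flags = 1u << 27;	/* a hypothetical _TIF_ bit above bit 15 */
	unsigned int mask  = 1u << 27;

	/* testw-style: operands truncated to 16 bits, so the bit is missed */
	int seen16 = ((unsigned short)flags & (unsigned short)mask) != 0; /* 0 */

	/* testl-style: full 32-bit test, so the bit is seen */
	int seen32 = (flags & mask) != 0;                                 /* 1 */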
@@ -368,6 +368,7 @@ ENTRY(save_rest)
 END(save_rest)
 /* save complete stack frame */
+	.pushsection .kprobes.text, "ax"
 ENTRY(save_paranoid)
 	XCPT_FRAME 1 RDI+8
 	cld
@@ -396,6 +397,7 @@ ENTRY(save_paranoid)
 1:	ret
 	CFI_ENDPROC
 END(save_paranoid)
+	.popsection
 /*
  * A newly forked process directly context switches into this address.
@@ -416,7 +418,6 @@ ENTRY(ret_from_fork)
 	GET_THREAD_INFO(%rcx)
-	CFI_REMEMBER_STATE
 	RESTORE_REST
 	testl $3, CS-ARGOFFSET(%rsp)	# from kernel_thread?
@@ -428,7 +429,6 @@ ENTRY(ret_from_fork)
 	RESTORE_TOP_OF_STACK %rdi, -ARGOFFSET
 	jmp ret_from_sys_call		# go to the SYSRET fastpath
-	CFI_RESTORE_STATE
 	CFI_ENDPROC
 END(ret_from_fork)
@@ -984,6 +984,8 @@ apicinterrupt UV_BAU_MESSAGE \
 #endif
 apicinterrupt LOCAL_TIMER_VECTOR \
 	apic_timer_interrupt smp_apic_timer_interrupt
+apicinterrupt GENERIC_INTERRUPT_VECTOR \
+	generic_interrupt smp_generic_interrupt
 #ifdef CONFIG_SMP
 apicinterrupt INVALIDATE_TLB_VECTOR_START+0 \
...
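Wrapping save_paranoid in .pushsection .kprobes.text / .popsection puts it into the kprobes blacklist section, so a probe can never be planted inside it: a breakpoint in this path would re-enter the very exception machinery that save_paranoid is setting up. C code gets the same protection with the __kprobes annotation, e.g.:

	#include <linux/kprobes.h>

	/* placed in .kprobes.text, so kprobes refuses to probe it
	 * (function name is illustrative) */
	static int __kprobes fragile_exception_helper(int vector)
	{
		return vector;
	}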
@@ -18,7 +18,7 @@ void __init i386_start_kernel(void)
 {
 	reserve_trampoline_memory();
-	reserve_early(__pa_symbol(&_text), __pa_symbol(&_end), "TEXT DATA BSS");
+	reserve_early(__pa_symbol(&_text), __pa_symbol(&__bss_stop), "TEXT DATA BSS");
 #ifdef CONFIG_BLK_DEV_INITRD
 	/* Reserve INITRD */
@@ -29,9 +29,6 @@ void __init i386_start_kernel(void)
 		reserve_early(ramdisk_image, ramdisk_end, "RAMDISK");
 	}
 #endif
-	reserve_early(init_pg_tables_start, init_pg_tables_end,
-		"INIT_PG_TABLE");
 	reserve_ebda_region();
 	/*
...
@@ -100,7 +100,7 @@ void __init x86_64_start_reservations(char *real_mode_data)
 	reserve_trampoline_memory();
-	reserve_early(__pa_symbol(&_text), __pa_symbol(&_end), "TEXT DATA BSS");
+	reserve_early(__pa_symbol(&_text), __pa_symbol(&__bss_stop), "TEXT DATA BSS");
 #ifdef CONFIG_BLK_DEV_INITRD
 	/* Reserve INITRD */
...
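Both the 32-bit and 64-bit boot paths now reserve the kernel image only up to __bss_stop instead of _end. The plausible reading, consistent with the dropped INIT_PG_TABLE reservation above, is that the region between __bss_stop and _end is the early-brk area introduced elsewhere in this series, which is reserved and managed on its own. Sketch of the symbols involved (all provided by the linker script):

	extern char _text[], __bss_stop[], _end[];	/* vmlinux.lds symbols */

	/* image proper: text + data + bss */
	reserve_early(__pa_symbol(&_text), __pa_symbol(&__bss_stop),
		      "TEXT DATA BSS");
	/* [__bss_stop, _end) assumed to be handled by the brk allocator */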
@@ -7,10 +7,10 @@
  */
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/init.h>
 #include <linux/delay.h>
+#include <linux/init.h>
 #include <linux/dmi.h>
-#include <asm/io.h>
+#include <linux/io.h>
 int io_delay_type __read_mostly = CONFIG_DEFAULT_IO_DELAY_TYPE;
@@ -47,8 +47,7 @@ EXPORT_SYMBOL(native_io_delay);
 static int __init dmi_io_delay_0xed_port(const struct dmi_system_id *id)
 {
 	if (io_delay_type == CONFIG_IO_DELAY_TYPE_0X80) {
-		printk(KERN_NOTICE "%s: using 0xed I/O delay port\n",
-			id->ident);
+		pr_notice("%s: using 0xed I/O delay port\n", id->ident);
 		io_delay_type = CONFIG_IO_DELAY_TYPE_0XED;
 	}
...
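The printk(KERN_NOTICE ...) to pr_notice() conversion is part of the same cleanup; pr_notice() is the standard shorthand from linux/kernel.h, defined roughly as follows (modulo the exact kernel version):

	#define pr_notice(fmt, ...) \
		printk(KERN_NOTICE pr_fmt(fmt), ##__VA_ARGS__)

Folding the message onto one line also keeps the format string greppable.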