Commit 8daf14cf authored by Ingo Molnar


Merge branches 'x86/xen', 'x86/build', 'x86/microcode', 'x86/mm-debug-v2', 'x86/memory-corruption-check', 'x86/early-printk', 'x86/xsave', 'x86/ptrace-v2', 'x86/quirks', 'x86/setup', 'x86/spinlocks' and 'x86/signal' into x86/core-v2
@@ -658,11 +658,12 @@ and is between 256 and 4096 characters. It is defined in the file
earlyprintk= [X86-32,X86-64,SH,BLACKFIN]
earlyprintk=vga
earlyprintk=serial[,ttySn[,baudrate]]
earlyprintk=dbgp
Append ",keep" to not disable it when the real console
takes over.
Only vga or serial or usb debug port at a time.
Currently only ttyS0 and ttyS1 are supported.
@@ -1231,6 +1232,29 @@ and is between 256 and 4096 characters. It is defined in the file
or
memmap=0x10000$0x18690000
memory_corruption_check=0/1 [X86]
Some BIOSes seem to corrupt the first 64k of
memory when doing things like suspend/resume.
Setting this option will scan the memory
looking for corruption. Enabling this will
both detect corruption and prevent the kernel
from using the memory being corrupted.
However, it's intended as a diagnostic tool; if
repeatable BIOS-originated corruption always
affects the same memory, you can use memmap=
to prevent the kernel from using that memory.
memory_corruption_check_size=size [X86]
By default it checks for corruption in the low
64k, making this memory unavailable for normal
use. Use this parameter to scan for
corruption in more or less memory.
memory_corruption_check_period=seconds [X86]
By default it checks for corruption every 60
seconds. Use this parameter to check at some
other rate. 0 disables periodic checking.
memtest= [KNL,X86] Enable memtest
Format: <integer>
range: 0,4 : pattern number
......
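As a quick illustration of how the three new parameters combine (the values here are made up for the example, not taken from the patch), a boot command line might carry:

    memory_corruption_check=1 memory_corruption_check_size=128K memory_corruption_check_period=30

This turns the scanner on, widens the reserved/scanned region to 128K, and checks every 30 seconds; memory_corruption_check_period=0 would disable the periodic scan while keeping the boot-time reservation.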
@@ -390,6 +390,11 @@ L: iommu@lists.linux-foundation.org
T: git://git.kernel.org/pub/scm/linux/kernel/git/joro/linux-2.6-iommu.git
S: Supported
AMD MICROCODE UPDATE SUPPORT
P: Peter Oruba
M: peter.oruba@amd.com
S: Supported
AMS (Apple Motion Sensor) DRIVER
P: Stelian Pop
M: stelian@popies.net
......
@@ -113,11 +113,6 @@ typedef struct siginfo {
#undef NSIGSEGV
#define NSIGSEGV 3
/*
* SIGTRAP si_codes
*/
#define TRAP_BRANCH (__SI_FAULT|3) /* process taken branch trap */
#define TRAP_HWBKPT (__SI_FAULT|4) /* hardware breakpoint or watchpoint */
#undef NSIGTRAP
#define NSIGTRAP 4
......
@@ -15,11 +15,6 @@
#include <asm-generic/siginfo.h>
/*
* SIGTRAP si_codes
*/
#define TRAP_BRANCH (__SI_FAULT|3) /* process taken branch trap */
#define TRAP_HWBKPT (__SI_FAULT|4) /* hardware breakpoint or watchpoint */
#undef NSIGTRAP
#define NSIGTRAP 4
......
@@ -778,23 +778,45 @@ config X86_REBOOTFIXUPS
Say N otherwise.
config MICROCODE
tristate "/dev/cpu/microcode - Intel IA32 CPU microcode support" tristate "/dev/cpu/microcode - microcode support"
select FW_LOADER
---help---
If you say Y here, you will be able to update the microcode on
Intel processors in the IA32 family, e.g. Pentium Pro, Pentium II, certain Intel and AMD processors. The Intel support is for the
Pentium III, Pentium 4, Xeon etc. You will obviously need the IA32 family, e.g. Pentium Pro, Pentium II, Pentium III,
actual microcode binary data itself which is not shipped with the Pentium 4, Xeon etc. The AMD support is for family 0x10 and
Linux kernel. 0x11 processors, e.g. Opteron, Phenom and Turion 64 Ultra.
You will obviously need the actual microcode binary data itself
which is not shipped with the Linux kernel.
For latest news and information on obtaining all the required This option selects the general module only, you need to select
ingredients for this driver, check: at least one vendor specific module as well.
<http://www.urbanmyth.org/microcode/>.
To compile this driver as a module, choose M here: the
module will be called microcode.
config MICROCODE_OLD_INTERFACE config MICROCODE_INTEL
bool "Intel microcode patch loading support"
depends on MICROCODE
default MICROCODE
select FW_LOADER
---help---
This option enables microcode patch loading support for Intel
processors.
For latest news and information on obtaining all the required
Intel ingredients for this driver, check:
<http://www.urbanmyth.org/microcode/>.
config MICROCODE_AMD
bool "AMD microcode patch loading support"
depends on MICROCODE
select FW_LOADER
---help---
If you select this option, microcode patch loading support for AMD
processors will be enabled.
config MICROCODE_OLD_INTERFACE
def_bool y
depends on MICROCODE
@@ -1061,6 +1083,56 @@ config HIGHPTE
low memory. Setting this option will put user-space page table
entries in high memory.
config X86_CHECK_BIOS_CORRUPTION
bool "Check for low memory corruption"
help
Periodically check for memory corruption in low memory, which
is suspected to be caused by BIOS. Even when enabled in the
configuration, it is disabled at runtime. Enable it by
setting "memory_corruption_check=1" on the kernel command
line. By default it scans the low 64k of memory every 60
seconds; see the memory_corruption_check_size and
memory_corruption_check_period parameters in
Documentation/kernel-parameters.txt to adjust this.
When enabled with the default parameters, this option has
almost no overhead, as it reserves a relatively small amount
of memory and scans it infrequently. It both detects corruption
and prevents it from affecting the running system.
It is, however, intended as a diagnostic tool; if repeatable
BIOS-originated corruption always affects the same memory,
you can use memmap= to prevent the kernel from using that
memory.
config X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK
bool "Set the default setting of memory_corruption_check"
depends on X86_CHECK_BIOS_CORRUPTION
default y
help
Set whether the default state of memory_corruption_check is
on or off.
config X86_RESERVE_LOW_64K
bool "Reserve low 64K of RAM on AMI/Phoenix BIOSen"
default y
help
Reserve the first 64K of physical RAM on BIOSes that are known
to potentially corrupt that memory range. A number of BIOSes are
known to utilize this area during suspend/resume, so it must not
be used by the kernel.
Set this to N if you are absolutely sure that you trust the BIOS
to get all its memory reservations and usages right.
If you have doubts about the BIOS (e.g. suspend/resume does not
work or there are kernel crashes after certain hardware hotplug
events) and it's not AMI or Phoenix, then you might want to enable
X86_CHECK_BIOS_CORRUPTION=y to allow the kernel to check typical
corruption patterns.
Say Y if unsure.
config MATH_EMULATION
bool
prompt "Math emulation" if X86_32
......
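To sketch how the new Kconfig symbols above fit together (an illustrative fragment, not shipped with the commit), a .config for a build that wants both vendor loaders plus the low-memory checker could contain:

    CONFIG_MICROCODE=m
    CONFIG_MICROCODE_INTEL=y
    CONFIG_MICROCODE_AMD=y
    CONFIG_X86_CHECK_BIOS_CORRUPTION=y
    CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
    CONFIG_X86_RESERVE_LOW_64K=y

With CONFIG_MICROCODE=m the loader builds as the 'microcode' module, and at least one vendor-specific option has to be selected for it to be useful.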
@@ -43,6 +43,19 @@ config EARLY_PRINTK
with klogd/syslogd or the X server. You should normally N here,
unless you want to debug such a crash.
config EARLY_PRINTK_DBGP
bool "Early printk via EHCI debug port"
default n
depends on EARLY_PRINTK && PCI
help
Write kernel log output directly into the EHCI debug port.
This is useful for kernel debugging when your machine crashes very
early before the console code is initialized. For normal operation
it is not recommended because it looks ugly and doesn't cooperate
with klogd/syslogd or the X server. You should normally say N here,
unless you want to debug such a crash. You need a USB debug device.
config DEBUG_STACKOVERFLOW
bool "Check for stack overflows"
depends on DEBUG_KERNEL
......
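For reference, using the new option at boot follows the documented earlyprintk syntax; something like the following on the kernel command line (illustrative, assuming an EHCI debug-port capable machine and a USB debug device):

    earlyprintk=dbgp,keep

where ",keep" keeps the early console alive after the real console takes over, as described in the kernel-parameters update above.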
@@ -45,3 +45,8 @@ cflags-$(CONFIG_MGEODEGX1) += -march=pentium-mmx
# cpu entries
cflags-$(CONFIG_X86_GENERIC) += $(call tune,generic,$(call tune,i686))
# Bug fix for binutils: this option is required in order to keep
# binutils from generating NOPL instructions against our will.
ifneq ($(CONFIG_X86_P6_NOP),y)
cflags-y += $(call cc-option,-Wa$(comma)-mtune=generic32,)
endif
@@ -72,9 +72,7 @@ KBUILD_CFLAGS := $(LINUXINCLUDE) -g -Os -D_SETUP -D__KERNEL__ \
KBUILD_CFLAGS += $(call cc-option,-m32)
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
$(obj)/zImage: IMAGE_OFFSET := 0x1000
$(obj)/zImage: asflags-y := $(SVGA_MODE) $(RAMDISK)
$(obj)/bzImage: IMAGE_OFFSET := 0x100000
$(obj)/bzImage: ccflags-y := -D__BIG_KERNEL__
$(obj)/bzImage: asflags-y := $(SVGA_MODE) $(RAMDISK) -D__BIG_KERNEL__
$(obj)/bzImage: BUILDFLAGS := -b
@@ -117,7 +115,7 @@ $(obj)/setup.bin: $(obj)/setup.elf FORCE
$(call if_changed,objcopy)
$(obj)/compressed/vmlinux: FORCE
$(Q)$(MAKE) $(build)=$(obj)/compressed $@
# Set this if you want to pass append arguments to the zdisk/fdimage/isoimage kernel
FDARGS =
@@ -181,6 +179,7 @@ isoimage: $(BOOTIMAGE)
mkisofs -J -r -o $(obj)/image.iso -b isolinux.bin -c boot.cat \
-no-emul-boot -boot-load-size 4 -boot-info-table \
$(obj)/isoimage
isohybrid $(obj)/image.iso 2>/dev/null || true
rm -rf $(obj)/isoimage
zlilo: $(BOOTIMAGE)
......
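As a usage note (not part of the patch), the FDARGS hook documented in this Makefile is how extra kernel arguments reach the zdisk/fdimage/isoimage kernels, e.g.:

    make isoimage FDARGS="earlyprintk=serial,ttyS0,115200"

The new isohybrid step is deliberately best-effort ("|| true"), so the image build still succeeds on hosts without the isohybrid tool.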
...@@ -27,9 +27,8 @@ $(obj)/vmlinux.bin: vmlinux FORCE ...@@ -27,9 +27,8 @@ $(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy) $(call if_changed,objcopy)
ifeq ($(CONFIG_X86_32),y) targets += vmlinux.bin.all vmlinux.relocs relocs
targets += vmlinux.bin.all vmlinux.relocs hostprogs-$(CONFIG_X86_32) += relocs
hostprogs-y := relocs
quiet_cmd_relocs = RELOCS $@ quiet_cmd_relocs = RELOCS $@
cmd_relocs = $(obj)/relocs $< > $@;$(obj)/relocs --abs-relocs $< cmd_relocs = $(obj)/relocs $< > $@;$(obj)/relocs --abs-relocs $<
...@@ -43,6 +42,8 @@ quiet_cmd_relocbin = BUILD $@ ...@@ -43,6 +42,8 @@ quiet_cmd_relocbin = BUILD $@
$(obj)/vmlinux.bin.all: $(vmlinux.bin.all-y) FORCE $(obj)/vmlinux.bin.all: $(vmlinux.bin.all-y) FORCE
$(call if_changed,relocbin) $(call if_changed,relocbin)
ifeq ($(CONFIG_X86_32),y)
ifdef CONFIG_RELOCATABLE ifdef CONFIG_RELOCATABLE
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE
$(call if_changed,gzip) $(call if_changed,gzip)
...@@ -59,6 +60,5 @@ $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE ...@@ -59,6 +60,5 @@ $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T
endif endif
$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.gz FORCE $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.gz FORCE
$(call if_changed,ld) $(call if_changed,ld)
...@@ -41,6 +41,7 @@ static u32 read_mbr_sig(u8 devno, struct edd_info *ei, u32 *mbrsig) ...@@ -41,6 +41,7 @@ static u32 read_mbr_sig(u8 devno, struct edd_info *ei, u32 *mbrsig)
char *mbrbuf_ptr, *mbrbuf_end; char *mbrbuf_ptr, *mbrbuf_end;
u32 buf_base, mbr_base; u32 buf_base, mbr_base;
extern char _end[]; extern char _end[];
u16 mbr_magic;
sector_size = ei->params.bytes_per_sector; sector_size = ei->params.bytes_per_sector;
if (!sector_size) if (!sector_size)
...@@ -58,11 +59,15 @@ static u32 read_mbr_sig(u8 devno, struct edd_info *ei, u32 *mbrsig) ...@@ -58,11 +59,15 @@ static u32 read_mbr_sig(u8 devno, struct edd_info *ei, u32 *mbrsig)
if (mbrbuf_end > (char *)(size_t)boot_params.hdr.heap_end_ptr) if (mbrbuf_end > (char *)(size_t)boot_params.hdr.heap_end_ptr)
return -1; return -1;
memset(mbrbuf_ptr, 0, sector_size);
if (read_mbr(devno, mbrbuf_ptr)) if (read_mbr(devno, mbrbuf_ptr))
return -1; return -1;
*mbrsig = *(u32 *)&mbrbuf_ptr[EDD_MBR_SIG_OFFSET]; *mbrsig = *(u32 *)&mbrbuf_ptr[EDD_MBR_SIG_OFFSET];
return 0; mbr_magic = *(u16 *)&mbrbuf_ptr[510];
/* check for valid MBR magic */
return mbr_magic == 0xAA55 ? 0 : -1;
} }
static int get_edd_info(u8 devno, struct edd_info *ei) static int get_edd_info(u8 devno, struct edd_info *ei)
......
...@@ -224,7 +224,7 @@ static void vesa_store_pm_info(void) ...@@ -224,7 +224,7 @@ static void vesa_store_pm_info(void)
static void vesa_store_mode_params_graphics(void) static void vesa_store_mode_params_graphics(void)
{ {
/* Tell the kernel we're in VESA graphics mode */ /* Tell the kernel we're in VESA graphics mode */
boot_params.screen_info.orig_video_isVGA = 0x23; boot_params.screen_info.orig_video_isVGA = VIDEO_TYPE_VLFB;
/* Mode parameters */ /* Mode parameters */
boot_params.screen_info.vesa_attributes = vminfo.mode_attr; boot_params.screen_info.vesa_attributes = vminfo.mode_attr;
......
...@@ -1535,7 +1535,6 @@ CONFIG_BACKLIGHT_CLASS_DEVICE=y ...@@ -1535,7 +1535,6 @@ CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_VGA_CONSOLE=y CONFIG_VGA_CONSOLE=y
CONFIG_VGACON_SOFT_SCROLLBACK=y CONFIG_VGACON_SOFT_SCROLLBACK=y
CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=64 CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=64
CONFIG_VIDEO_SELECT=y
CONFIG_DUMMY_CONSOLE=y CONFIG_DUMMY_CONSOLE=y
# CONFIG_FRAMEBUFFER_CONSOLE is not set # CONFIG_FRAMEBUFFER_CONSOLE is not set
CONFIG_LOGO=y CONFIG_LOGO=y
......
...@@ -1505,7 +1505,6 @@ CONFIG_BACKLIGHT_CLASS_DEVICE=y ...@@ -1505,7 +1505,6 @@ CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_VGA_CONSOLE=y CONFIG_VGA_CONSOLE=y
CONFIG_VGACON_SOFT_SCROLLBACK=y CONFIG_VGACON_SOFT_SCROLLBACK=y
CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=64 CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=64
CONFIG_VIDEO_SELECT=y
CONFIG_DUMMY_CONSOLE=y CONFIG_DUMMY_CONSOLE=y
# CONFIG_FRAMEBUFFER_CONSOLE is not set # CONFIG_FRAMEBUFFER_CONSOLE is not set
CONFIG_LOGO=y CONFIG_LOGO=y
......
...@@ -351,31 +351,28 @@ static int ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc, ...@@ -351,31 +351,28 @@ static int ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc,
savesegment(es, tmp); savesegment(es, tmp);
err |= __put_user(tmp, (unsigned int __user *)&sc->es); err |= __put_user(tmp, (unsigned int __user *)&sc->es);
err |= __put_user((u32)regs->di, &sc->di); err |= __put_user(regs->di, &sc->di);
err |= __put_user((u32)regs->si, &sc->si); err |= __put_user(regs->si, &sc->si);
err |= __put_user((u32)regs->bp, &sc->bp); err |= __put_user(regs->bp, &sc->bp);
err |= __put_user((u32)regs->sp, &sc->sp); err |= __put_user(regs->sp, &sc->sp);
err |= __put_user((u32)regs->bx, &sc->bx); err |= __put_user(regs->bx, &sc->bx);
err |= __put_user((u32)regs->dx, &sc->dx); err |= __put_user(regs->dx, &sc->dx);
err |= __put_user((u32)regs->cx, &sc->cx); err |= __put_user(regs->cx, &sc->cx);
err |= __put_user((u32)regs->ax, &sc->ax); err |= __put_user(regs->ax, &sc->ax);
err |= __put_user((u32)regs->cs, &sc->cs); err |= __put_user(regs->cs, &sc->cs);
err |= __put_user((u32)regs->ss, &sc->ss); err |= __put_user(regs->ss, &sc->ss);
err |= __put_user(current->thread.trap_no, &sc->trapno); err |= __put_user(current->thread.trap_no, &sc->trapno);
err |= __put_user(current->thread.error_code, &sc->err); err |= __put_user(current->thread.error_code, &sc->err);
err |= __put_user((u32)regs->ip, &sc->ip); err |= __put_user(regs->ip, &sc->ip);
err |= __put_user((u32)regs->flags, &sc->flags); err |= __put_user(regs->flags, &sc->flags);
err |= __put_user((u32)regs->sp, &sc->sp_at_signal); err |= __put_user(regs->sp, &sc->sp_at_signal);
tmp = save_i387_xstate_ia32(fpstate); tmp = save_i387_xstate_ia32(fpstate);
if (tmp < 0) if (tmp < 0)
err = -EFAULT; err = -EFAULT;
else { else
clear_used_math();
stts();
err |= __put_user(ptr_to_compat(tmp ? fpstate : NULL), err |= __put_user(ptr_to_compat(tmp ? fpstate : NULL),
&sc->fpstate); &sc->fpstate);
}
/* non-iBCS2 extensions.. */ /* non-iBCS2 extensions.. */
err |= __put_user(mask, &sc->oldmask); err |= __put_user(mask, &sc->oldmask);
...@@ -444,21 +441,18 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka, ...@@ -444,21 +441,18 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
frame = get_sigframe(ka, regs, sizeof(*frame), &fpstate); frame = get_sigframe(ka, regs, sizeof(*frame), &fpstate);
if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame))) if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
goto give_sigsegv; return -EFAULT;
err |= __put_user(sig, &frame->sig); if (__put_user(sig, &frame->sig))
if (err) return -EFAULT;
goto give_sigsegv;
err |= ia32_setup_sigcontext(&frame->sc, fpstate, regs, set->sig[0]); if (ia32_setup_sigcontext(&frame->sc, fpstate, regs, set->sig[0]))
if (err) return -EFAULT;
goto give_sigsegv;
if (_COMPAT_NSIG_WORDS > 1) { if (_COMPAT_NSIG_WORDS > 1) {
err |= __copy_to_user(frame->extramask, &set->sig[1], if (__copy_to_user(frame->extramask, &set->sig[1],
sizeof(frame->extramask)); sizeof(frame->extramask)))
if (err) return -EFAULT;
goto give_sigsegv;
} }
if (ka->sa.sa_flags & SA_RESTORER) { if (ka->sa.sa_flags & SA_RESTORER) {
...@@ -479,7 +473,7 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka, ...@@ -479,7 +473,7 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
*/ */
err |= __copy_to_user(frame->retcode, &code, 8); err |= __copy_to_user(frame->retcode, &code, 8);
if (err) if (err)
goto give_sigsegv; return -EFAULT;
/* Set up registers for signal handler */ /* Set up registers for signal handler */
regs->sp = (unsigned long) frame; regs->sp = (unsigned long) frame;
...@@ -502,10 +496,6 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka, ...@@ -502,10 +496,6 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
#endif #endif
return 0; return 0;
give_sigsegv:
force_sigsegv(sig, current);
return -EFAULT;
} }
int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info, int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
...@@ -533,14 +523,14 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info, ...@@ -533,14 +523,14 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
frame = get_sigframe(ka, regs, sizeof(*frame), &fpstate); frame = get_sigframe(ka, regs, sizeof(*frame), &fpstate);
if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame))) if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
goto give_sigsegv; return -EFAULT;
err |= __put_user(sig, &frame->sig); err |= __put_user(sig, &frame->sig);
err |= __put_user(ptr_to_compat(&frame->info), &frame->pinfo); err |= __put_user(ptr_to_compat(&frame->info), &frame->pinfo);
err |= __put_user(ptr_to_compat(&frame->uc), &frame->puc); err |= __put_user(ptr_to_compat(&frame->uc), &frame->puc);
err |= copy_siginfo_to_user32(&frame->info, info); err |= copy_siginfo_to_user32(&frame->info, info);
if (err) if (err)
goto give_sigsegv; return -EFAULT;
/* Create the ucontext. */ /* Create the ucontext. */
if (cpu_has_xsave) if (cpu_has_xsave)
...@@ -556,7 +546,7 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info, ...@@ -556,7 +546,7 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
regs, set->sig[0]); regs, set->sig[0]);
err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set)); err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
if (err) if (err)
goto give_sigsegv; return -EFAULT;
if (ka->sa.sa_flags & SA_RESTORER) if (ka->sa.sa_flags & SA_RESTORER)
restorer = ka->sa.sa_restorer; restorer = ka->sa.sa_restorer;
...@@ -571,7 +561,7 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info, ...@@ -571,7 +561,7 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
*/ */
err |= __copy_to_user(frame->retcode, &code, 8); err |= __copy_to_user(frame->retcode, &code, 8);
if (err) if (err)
goto give_sigsegv; return -EFAULT;
/* Set up registers for signal handler */ /* Set up registers for signal handler */
regs->sp = (unsigned long) frame; regs->sp = (unsigned long) frame;
...@@ -599,8 +589,4 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info, ...@@ -599,8 +589,4 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
#endif #endif
return 0; return 0;
give_sigsegv:
force_sigsegv(sig, current);
return -EFAULT;
} }
...@@ -10,7 +10,7 @@ ifdef CONFIG_FTRACE ...@@ -10,7 +10,7 @@ ifdef CONFIG_FTRACE
# Do not profile debug and lowlevel utilities # Do not profile debug and lowlevel utilities
CFLAGS_REMOVE_tsc.o = -pg CFLAGS_REMOVE_tsc.o = -pg
CFLAGS_REMOVE_rtc.o = -pg CFLAGS_REMOVE_rtc.o = -pg
CFLAGS_REMOVE_paravirt.o = -pg CFLAGS_REMOVE_paravirt-spinlocks.o = -pg
endif endif
# #
...@@ -51,7 +51,6 @@ obj-$(CONFIG_X86_BIOS_REBOOT) += reboot.o ...@@ -51,7 +51,6 @@ obj-$(CONFIG_X86_BIOS_REBOOT) += reboot.o
obj-$(CONFIG_MCA) += mca_32.o obj-$(CONFIG_MCA) += mca_32.o
obj-$(CONFIG_X86_MSR) += msr.o obj-$(CONFIG_X86_MSR) += msr.o
obj-$(CONFIG_X86_CPUID) += cpuid.o obj-$(CONFIG_X86_CPUID) += cpuid.o
obj-$(CONFIG_MICROCODE) += microcode.o
obj-$(CONFIG_PCI) += early-quirks.o obj-$(CONFIG_PCI) += early-quirks.o
apm-y := apm_32.o apm-y := apm_32.o
obj-$(CONFIG_APM) += apm.o obj-$(CONFIG_APM) += apm.o
...@@ -90,7 +89,7 @@ obj-$(CONFIG_DEBUG_NX_TEST) += test_nx.o ...@@ -90,7 +89,7 @@ obj-$(CONFIG_DEBUG_NX_TEST) += test_nx.o
obj-$(CONFIG_VMI) += vmi_32.o vmiclock_32.o obj-$(CONFIG_VMI) += vmi_32.o vmiclock_32.o
obj-$(CONFIG_KVM_GUEST) += kvm.o obj-$(CONFIG_KVM_GUEST) += kvm.o
obj-$(CONFIG_KVM_CLOCK) += kvmclock.o obj-$(CONFIG_KVM_CLOCK) += kvmclock.o
obj-$(CONFIG_PARAVIRT) += paravirt.o paravirt_patch_$(BITS).o obj-$(CONFIG_PARAVIRT) += paravirt.o paravirt_patch_$(BITS).o paravirt-spinlocks.o
obj-$(CONFIG_PARAVIRT_CLOCK) += pvclock.o obj-$(CONFIG_PARAVIRT_CLOCK) += pvclock.o
obj-$(CONFIG_PCSPKR_PLATFORM) += pcspeaker.o obj-$(CONFIG_PCSPKR_PLATFORM) += pcspeaker.o
...@@ -100,6 +99,11 @@ scx200-y += scx200_32.o ...@@ -100,6 +99,11 @@ scx200-y += scx200_32.o
obj-$(CONFIG_OLPC) += olpc.o obj-$(CONFIG_OLPC) += olpc.o
microcode-y := microcode_core.o
microcode-$(CONFIG_MICROCODE_INTEL) += microcode_intel.o
microcode-$(CONFIG_MICROCODE_AMD) += microcode_amd.o
obj-$(CONFIG_MICROCODE) += microcode.o
### ###
# 64 bit specific files # 64 bit specific files
ifeq ($(CONFIG_X86_64),y) ifeq ($(CONFIG_X86_64),y)
......
...@@ -1418,8 +1418,16 @@ static int __init force_acpi_ht(const struct dmi_system_id *d) ...@@ -1418,8 +1418,16 @@ static int __init force_acpi_ht(const struct dmi_system_id *d)
*/ */
static int __init dmi_ignore_irq0_timer_override(const struct dmi_system_id *d) static int __init dmi_ignore_irq0_timer_override(const struct dmi_system_id *d)
{ {
pr_notice("%s detected: Ignoring BIOS IRQ0 pin2 override\n", d->ident); /*
acpi_skip_timer_override = 1; * The ati_ixp4x0_rev() early PCI quirk should have set
* the acpi_skip_timer_override flag already:
*/
if (!acpi_skip_timer_override) {
WARN(1, KERN_ERR "ati_ixp4x0 quirk not complete.\n");
pr_notice("%s detected: Ignoring BIOS IRQ0 pin2 override\n",
d->ident);
acpi_skip_timer_override = 1;
}
return 0; return 0;
} }
......
...@@ -1121,16 +1121,5 @@ void __cpuinit cpu_init(void) ...@@ -1121,16 +1121,5 @@ void __cpuinit cpu_init(void)
xsave_init(); xsave_init();
} }
#ifdef CONFIG_HOTPLUG_CPU
void __cpuinit cpu_uninit(void)
{
int cpu = raw_smp_processor_id();
cpu_clear(cpu, cpu_initialized);
/* lazy TLB state */
per_cpu(cpu_tlbstate, cpu).state = 0;
per_cpu(cpu_tlbstate, cpu).active_mm = &init_mm;
}
#endif
#endif #endif
...@@ -66,6 +66,6 @@ struct tss_struct doublefault_tss __cacheline_aligned = { ...@@ -66,6 +66,6 @@ struct tss_struct doublefault_tss __cacheline_aligned = {
.ds = __USER_DS, .ds = __USER_DS,
.fs = __KERNEL_PERCPU, .fs = __KERNEL_PERCPU,
.__cr3 = __pa(swapper_pg_dir) .__cr3 = __phys_addr_const((unsigned long)swapper_pg_dir)
} }
}; };
...@@ -95,6 +95,52 @@ static void __init nvidia_bugs(int num, int slot, int func) ...@@ -95,6 +95,52 @@ static void __init nvidia_bugs(int num, int slot, int func)
} }
static u32 ati_ixp4x0_rev(int num, int slot, int func)
{
u32 d;
u8 b;
b = read_pci_config_byte(num, slot, func, 0xac);
b &= ~(1<<5);
write_pci_config_byte(num, slot, func, 0xac, b);
d = read_pci_config(num, slot, func, 0x70);
d |= 1<<8;
write_pci_config(num, slot, func, 0x70, d);
d = read_pci_config(num, slot, func, 0x8);
d &= 0xff;
return d;
}
static void __init ati_bugs(int num, int slot, int func)
{
#if defined(CONFIG_ACPI) && defined (CONFIG_X86_IO_APIC)
u32 d;
u8 b;
if (acpi_use_timer_override)
return;
d = ati_ixp4x0_rev(num, slot, func);
if (d < 0x82)
acpi_skip_timer_override = 1;
else {
/* check for IRQ0 interrupt swap */
outb(0x72, 0xcd6); b = inb(0xcd7);
if (!(b & 0x2))
acpi_skip_timer_override = 1;
}
if (acpi_skip_timer_override) {
printk(KERN_INFO "SB4X0 revision 0x%x\n", d);
printk(KERN_INFO "Ignoring ACPI timer override.\n");
printk(KERN_INFO "If you got timer trouble "
"try acpi_use_timer_override\n");
}
#endif
}
#ifdef CONFIG_DMAR #ifdef CONFIG_DMAR
static void __init intel_g33_dmar(int num, int slot, int func) static void __init intel_g33_dmar(int num, int slot, int func)
{ {
...@@ -128,6 +174,8 @@ static struct chipset early_qrk[] __initdata = { ...@@ -128,6 +174,8 @@ static struct chipset early_qrk[] __initdata = {
PCI_CLASS_BRIDGE_PCI, PCI_ANY_ID, QFLAG_APPLY_ONCE, via_bugs }, PCI_CLASS_BRIDGE_PCI, PCI_ANY_ID, QFLAG_APPLY_ONCE, via_bugs },
{ PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_K8_NB, { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_K8_NB,
PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, fix_hypertransport_config }, PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, fix_hypertransport_config },
{ PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_IXP400_SMBUS,
PCI_CLASS_SERIAL_SMBUS, PCI_ANY_ID, 0, ati_bugs },
#ifdef CONFIG_DMAR #ifdef CONFIG_DMAR
{ PCI_VENDOR_ID_INTEL, 0x29c0, { PCI_VENDOR_ID_INTEL, 0x29c0,
PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, intel_g33_dmar }, PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, intel_g33_dmar },
......
...@@ -468,9 +468,23 @@ static int save_i387_fxsave(struct _fpstate_ia32 __user *buf) ...@@ -468,9 +468,23 @@ static int save_i387_fxsave(struct _fpstate_ia32 __user *buf)
static int save_i387_xsave(void __user *buf) static int save_i387_xsave(void __user *buf)
{ {
struct task_struct *tsk = current;
struct _fpstate_ia32 __user *fx = buf; struct _fpstate_ia32 __user *fx = buf;
int err = 0; int err = 0;
/*
* For legacy compatibility, we always set the FP/SSE bits in the bit
* vector while saving the state to the user context.
* This lets us capture any changes (made during sigreturn) to
* the FP/SSE bits by legacy applications which don't touch
* xstate_bv in the xsave header.
*
* xsave aware applications can change the xstate_bv in the xsave
* header as well as change any contents in the memory layout.
* xrestore as part of sigreturn will capture all the changes.
*/
tsk->thread.xstate->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE;
if (save_i387_fxsave(fx) < 0) if (save_i387_fxsave(fx) < 0)
return -1; return -1;
......
...@@ -52,6 +52,8 @@ static int alloc_ldt(mm_context_t *pc, int mincount, int reload) ...@@ -52,6 +52,8 @@ static int alloc_ldt(mm_context_t *pc, int mincount, int reload)
memset(newldt + oldsize * LDT_ENTRY_SIZE, 0, memset(newldt + oldsize * LDT_ENTRY_SIZE, 0,
(mincount - oldsize) * LDT_ENTRY_SIZE); (mincount - oldsize) * LDT_ENTRY_SIZE);
paravirt_alloc_ldt(newldt, mincount);
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
/* CHECKME: Do we really need this ? */ /* CHECKME: Do we really need this ? */
wmb(); wmb();
...@@ -74,6 +76,7 @@ static int alloc_ldt(mm_context_t *pc, int mincount, int reload) ...@@ -74,6 +76,7 @@ static int alloc_ldt(mm_context_t *pc, int mincount, int reload)
#endif #endif
} }
if (oldsize) { if (oldsize) {
paravirt_free_ldt(oldldt, oldsize);
if (oldsize * LDT_ENTRY_SIZE > PAGE_SIZE) if (oldsize * LDT_ENTRY_SIZE > PAGE_SIZE)
vfree(oldldt); vfree(oldldt);
else else
...@@ -85,10 +88,13 @@ static int alloc_ldt(mm_context_t *pc, int mincount, int reload) ...@@ -85,10 +88,13 @@ static int alloc_ldt(mm_context_t *pc, int mincount, int reload)
static inline int copy_ldt(mm_context_t *new, mm_context_t *old) static inline int copy_ldt(mm_context_t *new, mm_context_t *old)
{ {
int err = alloc_ldt(new, old->size, 0); int err = alloc_ldt(new, old->size, 0);
int i;
if (err < 0) if (err < 0)
return err; return err;
memcpy(new->ldt, old->ldt, old->size * LDT_ENTRY_SIZE);
for(i = 0; i < old->size; i++)
write_ldt_entry(new->ldt, i, old->ldt + i * LDT_ENTRY_SIZE);
return 0; return 0;
} }
...@@ -125,6 +131,7 @@ void destroy_context(struct mm_struct *mm) ...@@ -125,6 +131,7 @@ void destroy_context(struct mm_struct *mm)
if (mm == current->active_mm) if (mm == current->active_mm)
clear_LDT(); clear_LDT();
#endif #endif
paravirt_free_ldt(mm->context.ldt, mm->context.size);
if (mm->context.size * LDT_ENTRY_SIZE > PAGE_SIZE) if (mm->context.size * LDT_ENTRY_SIZE > PAGE_SIZE)
vfree(mm->context.ldt); vfree(mm->context.ldt);
else else
......
/*
* Split spinlock implementation out into its own file, so it can be
* compiled in a FTRACE-compatible way.
*/
#include <linux/spinlock.h>
#include <linux/module.h>
#include <asm/paravirt.h>
static void default_spin_lock_flags(struct raw_spinlock *lock, unsigned long flags)
{
__raw_spin_lock(lock);
}
struct pv_lock_ops pv_lock_ops = {
#ifdef CONFIG_SMP
.spin_is_locked = __ticket_spin_is_locked,
.spin_is_contended = __ticket_spin_is_contended,
.spin_lock = __ticket_spin_lock,
.spin_lock_flags = default_spin_lock_flags,
.spin_trylock = __ticket_spin_trylock,
.spin_unlock = __ticket_spin_unlock,
#endif
};
EXPORT_SYMBOL(pv_lock_ops);
void __init paravirt_use_bytelocks(void)
{
#ifdef CONFIG_SMP
pv_lock_ops.spin_is_locked = __byte_spin_is_locked;
pv_lock_ops.spin_is_contended = __byte_spin_is_contended;
pv_lock_ops.spin_lock = __byte_spin_lock;
pv_lock_ops.spin_trylock = __byte_spin_trylock;
pv_lock_ops.spin_unlock = __byte_spin_unlock;
#endif
}
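To show how the hook above is meant to be consumed, here is a minimal, hypothetical sketch (the function name is invented; it is not part of this commit) of a paravirtualized guest's early setup switching from the default ticket locks to the byte-lock implementation:

    /* Hypothetical guest setup code, not from this commit. */
    static void __init example_guest_init_spinlocks(void)
    {
    	/* Replace the ticket-lock ops installed in pv_lock_ops by default. */
    	paravirt_use_bytelocks();
    }

Because pv_lock_ops is a plain ops structure, the switch only affects the SMP spinlock paths and leaves the UP build untouched (see the #ifdef CONFIG_SMP guards above).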
...@@ -268,17 +268,6 @@ enum paravirt_lazy_mode paravirt_get_lazy_mode(void) ...@@ -268,17 +268,6 @@ enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
return __get_cpu_var(paravirt_lazy_mode); return __get_cpu_var(paravirt_lazy_mode);
} }
void __init paravirt_use_bytelocks(void)
{
#ifdef CONFIG_SMP
pv_lock_ops.spin_is_locked = __byte_spin_is_locked;
pv_lock_ops.spin_is_contended = __byte_spin_is_contended;
pv_lock_ops.spin_lock = __byte_spin_lock;
pv_lock_ops.spin_trylock = __byte_spin_trylock;
pv_lock_ops.spin_unlock = __byte_spin_unlock;
#endif
}
struct pv_info pv_info = { struct pv_info pv_info = {
.name = "bare hardware", .name = "bare hardware",
.paravirt_enabled = 0, .paravirt_enabled = 0,
...@@ -349,6 +338,10 @@ struct pv_cpu_ops pv_cpu_ops = { ...@@ -349,6 +338,10 @@ struct pv_cpu_ops pv_cpu_ops = {
.write_ldt_entry = native_write_ldt_entry, .write_ldt_entry = native_write_ldt_entry,
.write_gdt_entry = native_write_gdt_entry, .write_gdt_entry = native_write_gdt_entry,
.write_idt_entry = native_write_idt_entry, .write_idt_entry = native_write_idt_entry,
.alloc_ldt = paravirt_nop,
.free_ldt = paravirt_nop,
.load_sp0 = native_load_sp0, .load_sp0 = native_load_sp0,
#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION) #if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
...@@ -460,18 +453,6 @@ struct pv_mmu_ops pv_mmu_ops = { ...@@ -460,18 +453,6 @@ struct pv_mmu_ops pv_mmu_ops = {
.set_fixmap = native_set_fixmap, .set_fixmap = native_set_fixmap,
}; };
struct pv_lock_ops pv_lock_ops = {
#ifdef CONFIG_SMP
.spin_is_locked = __ticket_spin_is_locked,
.spin_is_contended = __ticket_spin_is_contended,
.spin_lock = __ticket_spin_lock,
.spin_trylock = __ticket_spin_trylock,
.spin_unlock = __ticket_spin_unlock,
#endif
};
EXPORT_SYMBOL(pv_lock_ops);
EXPORT_SYMBOL_GPL(pv_time_ops); EXPORT_SYMBOL_GPL(pv_time_ops);
EXPORT_SYMBOL (pv_cpu_ops); EXPORT_SYMBOL (pv_cpu_ops);
EXPORT_SYMBOL (pv_mmu_ops); EXPORT_SYMBOL (pv_mmu_ops);
......
...@@ -76,47 +76,12 @@ unsigned long thread_saved_pc(struct task_struct *tsk) ...@@ -76,47 +76,12 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
return ((unsigned long *)tsk->thread.sp)[3]; return ((unsigned long *)tsk->thread.sp)[3];
} }
#ifdef CONFIG_HOTPLUG_CPU #ifndef CONFIG_SMP
#include <asm/nmi.h>
static void cpu_exit_clear(void)
{
int cpu = raw_smp_processor_id();
idle_task_exit();
cpu_uninit();
irq_ctx_exit(cpu);
cpu_clear(cpu, cpu_callout_map);
cpu_clear(cpu, cpu_callin_map);
numa_remove_cpu(cpu);
c1e_remove_cpu(cpu);
}
/* We don't actually take CPU down, just spin without interrupts. */
static inline void play_dead(void)
{
/* This must be done before dead CPU ack */
cpu_exit_clear();
mb();
/* Ack it */
__get_cpu_var(cpu_state) = CPU_DEAD;
/*
* With physical CPU hotplug, we should halt the cpu
*/
local_irq_disable();
/* mask all interrupts, flush any and all caches, and halt */
wbinvd_halt();
}
#else
static inline void play_dead(void) static inline void play_dead(void)
{ {
BUG(); BUG();
} }
#endif /* CONFIG_HOTPLUG_CPU */ #endif
/* /*
* The idle thread. There's no useful work to be * The idle thread. There's no useful work to be
......
...@@ -86,30 +86,12 @@ void exit_idle(void) ...@@ -86,30 +86,12 @@ void exit_idle(void)
__exit_idle(); __exit_idle();
} }
#ifdef CONFIG_HOTPLUG_CPU #ifndef CONFIG_SMP
DECLARE_PER_CPU(int, cpu_state);
#include <linux/nmi.h>
/* We halt the CPU with physical CPU hotplug */
static inline void play_dead(void)
{
idle_task_exit();
c1e_remove_cpu(raw_smp_processor_id());
mb();
/* Ack it */
__get_cpu_var(cpu_state) = CPU_DEAD;
local_irq_disable();
/* mask all interrupts, flush any and all caches, and halt */
wbinvd_halt();
}
#else
static inline void play_dead(void) static inline void play_dead(void)
{ {
BUG(); BUG();
} }
#endif /* CONFIG_HOTPLUG_CPU */ #endif
/* /*
* The idle thread. There's no useful work to be * The idle thread. There's no useful work to be
......
...@@ -40,7 +40,9 @@ enum x86_regset { ...@@ -40,7 +40,9 @@ enum x86_regset {
REGSET_GENERAL, REGSET_GENERAL,
REGSET_FP, REGSET_FP,
REGSET_XFP, REGSET_XFP,
REGSET_IOPERM64 = REGSET_XFP,
REGSET_TLS, REGSET_TLS,
REGSET_IOPERM32,
}; };
/* /*
...@@ -555,6 +557,29 @@ static int ptrace_set_debugreg(struct task_struct *child, ...@@ -555,6 +557,29 @@ static int ptrace_set_debugreg(struct task_struct *child,
return 0; return 0;
} }
/*
* These access the current or another (stopped) task's io permission
* bitmap for debugging or core dump.
*/
static int ioperm_active(struct task_struct *target,
const struct user_regset *regset)
{
return target->thread.io_bitmap_max / regset->size;
}
static int ioperm_get(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf)
{
if (!target->thread.io_bitmap_ptr)
return -ENXIO;
return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
target->thread.io_bitmap_ptr,
0, IO_BITMAP_BYTES);
}
#ifdef CONFIG_X86_PTRACE_BTS #ifdef CONFIG_X86_PTRACE_BTS
/* /*
* The configuration for a particular BTS hardware implementation. * The configuration for a particular BTS hardware implementation.
...@@ -1385,6 +1410,12 @@ static const struct user_regset x86_64_regsets[] = { ...@@ -1385,6 +1410,12 @@ static const struct user_regset x86_64_regsets[] = {
.size = sizeof(long), .align = sizeof(long), .size = sizeof(long), .align = sizeof(long),
.active = xfpregs_active, .get = xfpregs_get, .set = xfpregs_set .active = xfpregs_active, .get = xfpregs_get, .set = xfpregs_set
}, },
[REGSET_IOPERM64] = {
.core_note_type = NT_386_IOPERM,
.n = IO_BITMAP_LONGS,
.size = sizeof(long), .align = sizeof(long),
.active = ioperm_active, .get = ioperm_get
},
}; };
static const struct user_regset_view user_x86_64_view = { static const struct user_regset_view user_x86_64_view = {
...@@ -1431,6 +1462,12 @@ static const struct user_regset x86_32_regsets[] = { ...@@ -1431,6 +1462,12 @@ static const struct user_regset x86_32_regsets[] = {
.active = regset_tls_active, .active = regset_tls_active,
.get = regset_tls_get, .set = regset_tls_set .get = regset_tls_get, .set = regset_tls_set
}, },
[REGSET_IOPERM32] = {
.core_note_type = NT_386_IOPERM,
.n = IO_BITMAP_BYTES / sizeof(u32),
.size = sizeof(u32), .align = sizeof(u32),
.active = ioperm_active, .get = ioperm_get
},
}; };
static const struct user_regset_view user_x86_32_view = { static const struct user_regset_view user_x86_32_view = {
...@@ -1452,7 +1489,8 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task) ...@@ -1452,7 +1489,8 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
#endif #endif
} }
void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs, int error_code) void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs,
int error_code, int si_code)
{ {
struct siginfo info; struct siginfo info;
...@@ -1461,7 +1499,7 @@ void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs, int error_code) ...@@ -1461,7 +1499,7 @@ void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs, int error_code)
memset(&info, 0, sizeof(info)); memset(&info, 0, sizeof(info));
info.si_signo = SIGTRAP; info.si_signo = SIGTRAP;
info.si_code = TRAP_BRKPT; info.si_code = si_code;
/* User-mode ip? */ /* User-mode ip? */
info.si_addr = user_mode_vm(regs) ? (void __user *) regs->ip : NULL; info.si_addr = user_mode_vm(regs) ? (void __user *) regs->ip : NULL;
...@@ -1548,5 +1586,5 @@ asmregparm void syscall_trace_leave(struct pt_regs *regs) ...@@ -1548,5 +1586,5 @@ asmregparm void syscall_trace_leave(struct pt_regs *regs)
*/ */
if (test_thread_flag(TIF_SINGLESTEP) && if (test_thread_flag(TIF_SINGLESTEP) &&
tracehook_consider_fatal_signal(current, SIGTRAP, SIG_DFL)) tracehook_consider_fatal_signal(current, SIGTRAP, SIG_DFL))
send_sigtrap(current, regs, 0); send_sigtrap(current, regs, 0, TRAP_BRKPT);
} }
...@@ -581,6 +581,190 @@ static struct x86_quirks default_x86_quirks __initdata; ...@@ -581,6 +581,190 @@ static struct x86_quirks default_x86_quirks __initdata;
struct x86_quirks *x86_quirks __initdata = &default_x86_quirks; struct x86_quirks *x86_quirks __initdata = &default_x86_quirks;
/*
* Some BIOSes seem to corrupt the low 64k of memory during events
* like suspend/resume and unplugging an HDMI cable. Reserve all
* remaining free memory in that area and fill it with a distinct
* pattern.
*/
#ifdef CONFIG_X86_CHECK_BIOS_CORRUPTION
#define MAX_SCAN_AREAS 8
static int __read_mostly memory_corruption_check = -1;
static unsigned __read_mostly corruption_check_size = 64*1024;
static unsigned __read_mostly corruption_check_period = 60; /* seconds */
static struct e820entry scan_areas[MAX_SCAN_AREAS];
static int num_scan_areas;
static int set_corruption_check(char *arg)
{
char *end;
memory_corruption_check = simple_strtol(arg, &end, 10);
return (*end == 0) ? 0 : -EINVAL;
}
early_param("memory_corruption_check", set_corruption_check);
static int set_corruption_check_period(char *arg)
{
char *end;
corruption_check_period = simple_strtoul(arg, &end, 10);
return (*end == 0) ? 0 : -EINVAL;
}
early_param("memory_corruption_check_period", set_corruption_check_period);
static int set_corruption_check_size(char *arg)
{
char *end;
unsigned size;
size = memparse(arg, &end);
if (*end == '\0')
corruption_check_size = size;
return (size == corruption_check_size) ? 0 : -EINVAL;
}
early_param("memory_corruption_check_size", set_corruption_check_size);
static void __init setup_bios_corruption_check(void)
{
u64 addr = PAGE_SIZE; /* assume first page is reserved anyway */
if (memory_corruption_check == -1) {
memory_corruption_check =
#ifdef CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK
1
#else
0
#endif
;
}
if (corruption_check_size == 0)
memory_corruption_check = 0;
if (!memory_corruption_check)
return;
corruption_check_size = round_up(corruption_check_size, PAGE_SIZE);
while(addr < corruption_check_size && num_scan_areas < MAX_SCAN_AREAS) {
u64 size;
addr = find_e820_area_size(addr, &size, PAGE_SIZE);
if (addr == 0)
break;
if ((addr + size) > corruption_check_size)
size = corruption_check_size - addr;
if (size == 0)
break;
e820_update_range(addr, size, E820_RAM, E820_RESERVED);
scan_areas[num_scan_areas].addr = addr;
scan_areas[num_scan_areas].size = size;
num_scan_areas++;
/* Assume we've already mapped this early memory */
memset(__va(addr), 0, size);
addr += size;
}
printk(KERN_INFO "Scanning %d areas for low memory corruption\n",
num_scan_areas);
update_e820();
}
static struct timer_list periodic_check_timer;
void check_for_bios_corruption(void)
{
int i;
int corruption = 0;
if (!memory_corruption_check)
return;
for(i = 0; i < num_scan_areas; i++) {
unsigned long *addr = __va(scan_areas[i].addr);
unsigned long size = scan_areas[i].size;
for(; size; addr++, size -= sizeof(unsigned long)) {
if (!*addr)
continue;
printk(KERN_ERR "Corrupted low memory at %p (%lx phys) = %08lx\n",
addr, __pa(addr), *addr);
corruption = 1;
*addr = 0;
}
}
WARN(corruption, KERN_ERR "Memory corruption detected in low memory\n");
}
static void periodic_check_for_corruption(unsigned long data)
{
check_for_bios_corruption();
mod_timer(&periodic_check_timer, round_jiffies(jiffies + corruption_check_period*HZ));
}
void start_periodic_check_for_corruption(void)
{
if (!memory_corruption_check || corruption_check_period == 0)
return;
printk(KERN_INFO "Scanning for low memory corruption every %d seconds\n",
corruption_check_period);
init_timer(&periodic_check_timer);
periodic_check_timer.function = &periodic_check_for_corruption;
periodic_check_for_corruption(0);
}
#endif
static int __init dmi_low_memory_corruption(const struct dmi_system_id *d)
{
printk(KERN_NOTICE
"%s detected: BIOS may corrupt low RAM, working it around.\n",
d->ident);
e820_update_range(0, 0x10000, E820_RAM, E820_RESERVED);
sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
return 0;
}
/* List of systems that have known low memory corruption BIOS problems */
static struct dmi_system_id __initdata bad_bios_dmi_table[] = {
#ifdef CONFIG_X86_RESERVE_LOW_64K
{
.callback = dmi_low_memory_corruption,
.ident = "AMI BIOS",
.matches = {
DMI_MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
},
},
{
.callback = dmi_low_memory_corruption,
.ident = "Phoenix BIOS",
.matches = {
DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies, LTD"),
},
},
#endif
{}
};
/* /*
* Determine if we were loaded by an EFI loader. If so, then we have also been * Determine if we were loaded by an EFI loader. If so, then we have also been
* passed the efi memmap, systab, etc., so we should use these data structures * passed the efi memmap, systab, etc., so we should use these data structures
...@@ -715,6 +899,10 @@ void __init setup_arch(char **cmdline_p) ...@@ -715,6 +899,10 @@ void __init setup_arch(char **cmdline_p)
finish_e820_parsing(); finish_e820_parsing();
dmi_scan_machine();
dmi_check_system(bad_bios_dmi_table);
#ifdef CONFIG_X86_32 #ifdef CONFIG_X86_32
probe_roms(); probe_roms();
#endif #endif
...@@ -771,6 +959,10 @@ void __init setup_arch(char **cmdline_p) ...@@ -771,6 +959,10 @@ void __init setup_arch(char **cmdline_p)
high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1; high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1;
#endif #endif
#ifdef CONFIG_X86_CHECK_BIOS_CORRUPTION
setup_bios_corruption_check();
#endif
/* max_pfn_mapped is updated here */ /* max_pfn_mapped is updated here */
max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT); max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
max_pfn_mapped = max_low_pfn_mapped; max_pfn_mapped = max_low_pfn_mapped;
...@@ -799,8 +991,6 @@ void __init setup_arch(char **cmdline_p) ...@@ -799,8 +991,6 @@ void __init setup_arch(char **cmdline_p)
vsmp_init(); vsmp_init();
#endif #endif
dmi_scan_machine();
io_delay_init(); io_delay_init();
/* /*
...@@ -903,3 +1093,5 @@ void __init setup_arch(char **cmdline_p) ...@@ -903,3 +1093,5 @@ void __init setup_arch(char **cmdline_p)
#endif #endif
#endif #endif
} }
...@@ -214,12 +214,16 @@ void smp_call_function_single_interrupt(struct pt_regs *regs) ...@@ -214,12 +214,16 @@ void smp_call_function_single_interrupt(struct pt_regs *regs)
struct smp_ops smp_ops = { struct smp_ops smp_ops = {
.smp_prepare_boot_cpu = native_smp_prepare_boot_cpu, .smp_prepare_boot_cpu = native_smp_prepare_boot_cpu,
.smp_prepare_cpus = native_smp_prepare_cpus, .smp_prepare_cpus = native_smp_prepare_cpus,
.cpu_up = native_cpu_up,
.smp_cpus_done = native_smp_cpus_done, .smp_cpus_done = native_smp_cpus_done,
.smp_send_stop = native_smp_send_stop, .smp_send_stop = native_smp_send_stop,
.smp_send_reschedule = native_smp_send_reschedule, .smp_send_reschedule = native_smp_send_reschedule,
.cpu_up = native_cpu_up,
.cpu_die = native_cpu_die,
.cpu_disable = native_cpu_disable,
.play_dead = native_play_dead,
.send_call_func_ipi = native_send_call_func_ipi, .send_call_func_ipi = native_send_call_func_ipi,
.send_call_func_single_ipi = native_send_call_func_single_ipi, .send_call_func_single_ipi = native_send_call_func_single_ipi,
}; };
......
...@@ -52,6 +52,7 @@ ...@@ -52,6 +52,7 @@
#include <asm/desc.h> #include <asm/desc.h>
#include <asm/nmi.h> #include <asm/nmi.h>
#include <asm/irq.h> #include <asm/irq.h>
#include <asm/idle.h>
#include <asm/smp.h> #include <asm/smp.h>
#include <asm/trampoline.h> #include <asm/trampoline.h>
#include <asm/cpu.h> #include <asm/cpu.h>
...@@ -1344,25 +1345,9 @@ static void __ref remove_cpu_from_maps(int cpu) ...@@ -1344,25 +1345,9 @@ static void __ref remove_cpu_from_maps(int cpu)
numa_remove_cpu(cpu); numa_remove_cpu(cpu);
} }
int __cpu_disable(void) void cpu_disable_common(void)
{ {
int cpu = smp_processor_id(); int cpu = smp_processor_id();
/*
* Perhaps use cpufreq to drop frequency, but that could go
* into generic code.
*
* We won't take down the boot processor on i386 due to some
* interrupts only being able to be serviced by the BSP.
* Especially so if we're not using an IOAPIC -zwane
*/
if (cpu == 0)
return -EBUSY;
if (nmi_watchdog == NMI_LOCAL_APIC)
stop_apic_nmi_watchdog(NULL);
clear_local_APIC();
/* /*
* HACK: * HACK:
* Allow any queued timer interrupts to get serviced * Allow any queued timer interrupts to get serviced
...@@ -1380,10 +1365,32 @@ int __cpu_disable(void) ...@@ -1380,10 +1365,32 @@ int __cpu_disable(void)
remove_cpu_from_maps(cpu); remove_cpu_from_maps(cpu);
unlock_vector_lock(); unlock_vector_lock();
fixup_irqs(cpu_online_map); fixup_irqs(cpu_online_map);
}
int native_cpu_disable(void)
{
int cpu = smp_processor_id();
/*
* Perhaps use cpufreq to drop frequency, but that could go
* into generic code.
*
* We won't take down the boot processor on i386 due to some
* interrupts only being able to be serviced by the BSP.
* Especially so if we're not using an IOAPIC -zwane
*/
if (cpu == 0)
return -EBUSY;
if (nmi_watchdog == NMI_LOCAL_APIC)
stop_apic_nmi_watchdog(NULL);
clear_local_APIC();
cpu_disable_common();
return 0; return 0;
} }
void __cpu_die(unsigned int cpu) void native_cpu_die(unsigned int cpu)
{ {
/* We don't do anything here: idle task is faking death itself. */ /* We don't do anything here: idle task is faking death itself. */
unsigned int i; unsigned int i;
...@@ -1400,15 +1407,45 @@ void __cpu_die(unsigned int cpu) ...@@ -1400,15 +1407,45 @@ void __cpu_die(unsigned int cpu)
} }
printk(KERN_ERR "CPU %u didn't die...\n", cpu); printk(KERN_ERR "CPU %u didn't die...\n", cpu);
} }
void play_dead_common(void)
{
idle_task_exit();
reset_lazy_tlbstate();
irq_ctx_exit(raw_smp_processor_id());
c1e_remove_cpu(raw_smp_processor_id());
mb();
/* Ack it */
__get_cpu_var(cpu_state) = CPU_DEAD;
/*
* With physical CPU hotplug, we should halt the cpu
*/
local_irq_disable();
}
void native_play_dead(void)
{
play_dead_common();
wbinvd_halt();
}
#else /* ... !CONFIG_HOTPLUG_CPU */ #else /* ... !CONFIG_HOTPLUG_CPU */
int __cpu_disable(void) int native_cpu_disable(void)
{ {
return -ENOSYS; return -ENOSYS;
} }
void __cpu_die(unsigned int cpu) void native_cpu_die(unsigned int cpu)
{ {
/* We said "no" in __cpu_disable */ /* We said "no" in __cpu_disable */
BUG(); BUG();
} }
void native_play_dead(void)
{
BUG();
}
#endif #endif
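The new native_cpu_disable()/native_cpu_die()/native_play_dead() callbacks are exercised through the long-standing sysfs CPU hotplug interface (not something this commit adds); for example:

    echo 0 > /sys/devices/system/cpu/cpu1/online
    echo 1 > /sys/devices/system/cpu/cpu1/online

Offlining a CPU ends with the dying CPU running play_dead() via smp_ops.play_dead, i.e. play_dead_common() followed by wbinvd_halt() in the native case.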
...@@ -241,3 +241,11 @@ void flush_tlb_all(void) ...@@ -241,3 +241,11 @@ void flush_tlb_all(void)
on_each_cpu(do_flush_tlb_all, NULL, 1); on_each_cpu(do_flush_tlb_all, NULL, 1);
} }
void reset_lazy_tlbstate(void)
{
int cpu = raw_smp_processor_id();
per_cpu(cpu_tlbstate, cpu).state = 0;
per_cpu(cpu_tlbstate, cpu).active_mm = &init_mm;
}
...@@ -891,6 +891,7 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code) ...@@ -891,6 +891,7 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code)
{ {
struct task_struct *tsk = current; struct task_struct *tsk = current;
unsigned int condition; unsigned int condition;
int si_code;
trace_hardirqs_fixup(); trace_hardirqs_fixup();
...@@ -935,8 +936,9 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code) ...@@ -935,8 +936,9 @@ void __kprobes do_debug(struct pt_regs *regs, long error_code)
goto clear_TF_reenable; goto clear_TF_reenable;
} }
si_code = get_si_code((unsigned long)condition);
/* Ok, finally something we can handle */ /* Ok, finally something we can handle */
send_sigtrap(tsk, regs, error_code); send_sigtrap(tsk, regs, error_code, si_code);
/* /*
* Disable additional traps. They'll be re-enabled when * Disable additional traps. They'll be re-enabled when
......
...@@ -940,7 +940,7 @@ asmlinkage void __kprobes do_debug(struct pt_regs *regs, ...@@ -940,7 +940,7 @@ asmlinkage void __kprobes do_debug(struct pt_regs *regs,
tsk->thread.error_code = error_code; tsk->thread.error_code = error_code;
info.si_signo = SIGTRAP; info.si_signo = SIGTRAP;
info.si_errno = 0; info.si_errno = 0;
info.si_code = TRAP_BRKPT; info.si_code = get_si_code(condition);
info.si_addr = user_mode(regs) ? (void __user *)regs->ip : NULL; info.si_addr = user_mode(regs) ? (void __user *)regs->ip : NULL;
force_sig_info(SIGTRAP, &info, tsk); force_sig_info(SIGTRAP, &info, tsk);
......
...@@ -95,7 +95,9 @@ int save_i387_xstate(void __user *buf) ...@@ -95,7 +95,9 @@ int save_i387_xstate(void __user *buf)
* Start with clearing the user buffer. This will present a * Start with clearing the user buffer. This will present a
* clean context for the bytes not touched by the fxsave/xsave. * clean context for the bytes not touched by the fxsave/xsave.
*/ */
__clear_user(buf, sig_xstate_size); err = __clear_user(buf, sig_xstate_size);
if (err)
return err;
if (task_thread_info(tsk)->status & TS_XSAVE) if (task_thread_info(tsk)->status & TS_XSAVE)
err = xsave_user(buf); err = xsave_user(buf);
...@@ -114,6 +116,8 @@ int save_i387_xstate(void __user *buf) ...@@ -114,6 +116,8 @@ int save_i387_xstate(void __user *buf)
if (task_thread_info(tsk)->status & TS_XSAVE) { if (task_thread_info(tsk)->status & TS_XSAVE) {
struct _fpstate __user *fx = buf; struct _fpstate __user *fx = buf;
struct _xstate __user *x = buf;
u64 xstate_bv;
err = __copy_to_user(&fx->sw_reserved, &fx_sw_reserved, err = __copy_to_user(&fx->sw_reserved, &fx_sw_reserved,
sizeof(struct _fpx_sw_bytes)); sizeof(struct _fpx_sw_bytes));
...@@ -121,6 +125,31 @@ int save_i387_xstate(void __user *buf) ...@@ -121,6 +125,31 @@ int save_i387_xstate(void __user *buf)
err |= __put_user(FP_XSTATE_MAGIC2, err |= __put_user(FP_XSTATE_MAGIC2,
(__u32 __user *) (buf + sig_xstate_size (__u32 __user *) (buf + sig_xstate_size
- FP_XSTATE_MAGIC2_SIZE)); - FP_XSTATE_MAGIC2_SIZE));
/*
* Read the xstate_bv which we copied (directly from the cpu or
* from the state in task struct) to the user buffers and
* set the FP/SSE bits.
*/
err |= __get_user(xstate_bv, &x->xstate_hdr.xstate_bv);
/*
* For legacy compatibility, we always set the FP/SSE bits in the bit
* vector while saving the state to the user context. This lets
* us capture any changes (made during sigreturn) to
* the FP/SSE bits by legacy applications which don't touch
* xstate_bv in the xsave header.
*
* xsave aware apps can change the xstate_bv in the xsave
* header as well as change any contents in the memory layout.
* xrestore as part of sigreturn will capture all the changes.
*/
xstate_bv |= XSTATE_FPSSE;
err |= __put_user(xstate_bv, &x->xstate_hdr.xstate_bv);
if (err)
return err;
} }
return 1; return 1;
...@@ -272,7 +301,7 @@ void __cpuinit xsave_init(void) ...@@ -272,7 +301,7 @@ void __cpuinit xsave_init(void)
/* /*
* setup the xstate image representing the init state * setup the xstate image representing the init state
*/ */
void setup_xstate_init(void) static void __init setup_xstate_init(void)
{ {
init_xstate_buf = alloc_bootmem(xstate_size); init_xstate_buf = alloc_bootmem(xstate_size);
init_xstate_buf->i387.mxcsr = MXCSR_DEFAULT; init_xstate_buf->i387.mxcsr = MXCSR_DEFAULT;
......
...@@ -914,15 +914,15 @@ LIST_HEAD(pgd_list); ...@@ -914,15 +914,15 @@ LIST_HEAD(pgd_list);
void vmalloc_sync_all(void) void vmalloc_sync_all(void)
{ {
#ifdef CONFIG_X86_32
unsigned long start = VMALLOC_START & PGDIR_MASK;
unsigned long address; unsigned long address;
#ifdef CONFIG_X86_32
if (SHARED_KERNEL_PMD) if (SHARED_KERNEL_PMD)
return; return;
BUILD_BUG_ON(TASK_SIZE & ~PGDIR_MASK); for (address = VMALLOC_START & PMD_MASK;
for (address = start; address >= TASK_SIZE; address += PGDIR_SIZE) { address >= TASK_SIZE && address < FIXADDR_TOP;
address += PMD_SIZE) {
unsigned long flags; unsigned long flags;
struct page *page; struct page *page;
...@@ -935,10 +935,8 @@ void vmalloc_sync_all(void) ...@@ -935,10 +935,8 @@ void vmalloc_sync_all(void)
spin_unlock_irqrestore(&pgd_lock, flags); spin_unlock_irqrestore(&pgd_lock, flags);
} }
#else /* CONFIG_X86_64 */ #else /* CONFIG_X86_64 */
unsigned long start = VMALLOC_START & PGDIR_MASK; for (address = VMALLOC_START & PGDIR_MASK; address <= VMALLOC_END;
unsigned long address; address += PGDIR_SIZE) {
for (address = start; address <= VMALLOC_END; address += PGDIR_SIZE) {
const pgd_t *pgd_ref = pgd_offset_k(address); const pgd_t *pgd_ref = pgd_offset_k(address);
unsigned long flags; unsigned long flags;
struct page *page; struct page *page;
......
...@@ -31,6 +31,7 @@ ...@@ -31,6 +31,7 @@
#include <linux/cpumask.h> #include <linux/cpumask.h>
#include <asm/asm.h> #include <asm/asm.h>
#include <asm/bios_ebda.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/system.h> #include <asm/system.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
...@@ -969,6 +970,8 @@ void __init mem_init(void) ...@@ -969,6 +970,8 @@ void __init mem_init(void)
int codesize, reservedpages, datasize, initsize; int codesize, reservedpages, datasize, initsize;
int tmp; int tmp;
start_periodic_check_for_corruption();
#ifdef CONFIG_FLATMEM #ifdef CONFIG_FLATMEM
BUG_ON(!mem_map); BUG_ON(!mem_map);
#endif #endif
......
...@@ -31,6 +31,7 @@
#include <linux/nmi.h>

#include <asm/processor.h>
+#include <asm/bios_ebda.h>
#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
...@@ -881,6 +882,8 @@ void __init mem_init(void)
{
	long codesize, reservedpages, datasize, initsize;

+	start_periodic_check_for_corruption();
+
	pci_iommu_alloc();

	/* clear_bss() already clear the empty_zero_page */
......
...@@ -24,18 +24,26 @@

#ifdef CONFIG_X86_64

-unsigned long __phys_addr(unsigned long x)
-{
-	if (x >= __START_KERNEL_map)
-		return x - __START_KERNEL_map + phys_base;
-	return x - PAGE_OFFSET;
-}
-EXPORT_SYMBOL(__phys_addr);
-
-static inline int phys_addr_valid(unsigned long addr)
-{
-	return addr < (1UL << boot_cpu_data.x86_phys_bits);
-}
+static inline int phys_addr_valid(unsigned long addr)
+{
+	return addr < (1UL << boot_cpu_data.x86_phys_bits);
+}
+
+unsigned long __phys_addr(unsigned long x)
+{
+	if (x >= __START_KERNEL_map) {
+		x -= __START_KERNEL_map;
+		VIRTUAL_BUG_ON(x >= KERNEL_IMAGE_SIZE);
+		x += phys_base;
+	} else {
+		VIRTUAL_BUG_ON(x < PAGE_OFFSET);
+		x -= PAGE_OFFSET;
+		VIRTUAL_BUG_ON(system_state == SYSTEM_BOOTING ? x > MAXMEM :
+					!phys_addr_valid(x));
+	}
+	return x;
+}
+EXPORT_SYMBOL(__phys_addr);

#else
...@@ -44,6 +52,17 @@ static inline int phys_addr_valid(unsigned long addr)
	return 1;
}

+#ifdef CONFIG_DEBUG_VIRTUAL
+unsigned long __phys_addr(unsigned long x)
+{
+	/* VMALLOC_* aren't constants; not available at the boot time */
+	VIRTUAL_BUG_ON(x < PAGE_OFFSET || (system_state != SYSTEM_BOOTING &&
+					is_vmalloc_addr((void *)x)));
+	return x - PAGE_OFFSET;
+}
+EXPORT_SYMBOL(__phys_addr);
+#endif
+
#endif

int page_is_ram(unsigned long pagenr)
......
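The VIRTUAL_BUG_ON() checks above are only meant to cost anything when virtual-address debugging is enabled. A minimal sketch of how such a macro is typically wired up behind CONFIG_DEBUG_VIRTUAL (shown here for illustration; it is not part of this hunk):

#ifdef CONFIG_DEBUG_VIRTUAL
#define VIRTUAL_BUG_ON(cond)	BUG_ON(cond)		/* expensive sanity checks compiled in */
#else
#define VIRTUAL_BUG_ON(cond)	do { } while (0)	/* compiles away entirely */
#endif

With the option disabled the checks vanish and __phys_addr() reduces to the plain offset arithmetic it performed before.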
...@@ -26,5 +26,13 @@ config XEN_MAX_DOMAIN_MEMORY
config XEN_SAVE_RESTORE
	bool
-	depends on PM
+	depends on XEN && PM
	default y
\ No newline at end of file
config XEN_DEBUG_FS
bool "Enable Xen debug and tuning parameters in debugfs"
depends on XEN && DEBUG_FS
default n
help
Enable statistics output and various tuning options in debugfs.
Enabling this option may incur a significant performance overhead.
-obj-y		:= enlighten.o setup.o multicalls.o mmu.o \
+ifdef CONFIG_FTRACE
+# Do not profile debug and lowlevel utilities
+CFLAGS_REMOVE_spinlock.o = -pg
+CFLAGS_REMOVE_time.o = -pg
+CFLAGS_REMOVE_irq.o = -pg
+endif
+
+obj-y		:= enlighten.o setup.o multicalls.o mmu.o irq.o \
			time.o xen-asm_$(BITS).o grant-table.o suspend.o

-obj-$(CONFIG_SMP)		+= smp.o
+obj-$(CONFIG_SMP)		+= smp.o spinlock.o
+obj-$(CONFIG_XEN_DEBUG_FS)	+= debugfs.o
\ No newline at end of file
#include <linux/init.h>
#include <linux/debugfs.h>
#include <linux/module.h>
#include "debugfs.h"
static struct dentry *d_xen_debug;
struct dentry * __init xen_init_debugfs(void)
{
if (!d_xen_debug) {
d_xen_debug = debugfs_create_dir("xen", NULL);
if (!d_xen_debug)
pr_warning("Could not create 'xen' debugfs directory\n");
}
return d_xen_debug;
}
struct array_data
{
void *array;
unsigned elements;
};
static int u32_array_open(struct inode *inode, struct file *file)
{
file->private_data = NULL;
return nonseekable_open(inode, file);
}
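/*
 * Format 'array' into 'buf' using 'fmt' for each element.  Entries are
 * separated by spaces, the last one is followed by a newline, and the
 * string is NUL-terminated.  When 'buf' is NULL nothing is written and
 * only the required buffer size (including the NUL) is returned, so
 * callers can size an allocation before formatting for real.
 */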
static size_t format_array(char *buf, size_t bufsize, const char *fmt,
u32 *array, unsigned array_size)
{
size_t ret = 0;
unsigned i;
for(i = 0; i < array_size; i++) {
size_t len;
len = snprintf(buf, bufsize, fmt, array[i]);
len++; /* ' ' or '\n' */
ret += len;
if (buf) {
buf += len;
bufsize -= len;
buf[-1] = (i == array_size-1) ? '\n' : ' ';
}
}
ret++; /* \0 */
if (buf)
*buf = '\0';
return ret;
}
static char *format_array_alloc(const char *fmt, u32 *array, unsigned array_size)
{
size_t len = format_array(NULL, 0, fmt, array, array_size);
char *ret;
ret = kmalloc(len, GFP_KERNEL);
if (ret == NULL)
return NULL;
format_array(ret, len, fmt, array, array_size);
return ret;
}
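/*
 * Read handler: on the first read (*ppos == 0) format the whole array
 * into a string cached in file->private_data; later reads are served
 * from that cache so one open/read sequence sees a consistent snapshot.
 */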
static ssize_t u32_array_read(struct file *file, char __user *buf, size_t len,
loff_t *ppos)
{
struct inode *inode = file->f_path.dentry->d_inode;
struct array_data *data = inode->i_private;
size_t size;
if (*ppos == 0) {
if (file->private_data) {
kfree(file->private_data);
file->private_data = NULL;
}
file->private_data = format_array_alloc("%u", data->array, data->elements);
}
size = 0;
if (file->private_data)
size = strlen(file->private_data);
return simple_read_from_buffer(buf, len, ppos, file->private_data, size);
}
static int xen_array_release(struct inode *inode, struct file *file)
{
kfree(file->private_data);
return 0;
}
static struct file_operations u32_array_fops = {
.owner = THIS_MODULE,
.open = u32_array_open,
.release= xen_array_release,
.read = u32_array_read,
};
struct dentry *xen_debugfs_create_u32_array(const char *name, mode_t mode,
struct dentry *parent,
u32 *array, unsigned elements)
{
struct array_data *data = kmalloc(sizeof(*data), GFP_KERNEL);
if (data == NULL)
return NULL;
data->array = array;
data->elements = elements;
return debugfs_create_file(name, mode, parent, data, &u32_array_fops);
}
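A hypothetical usage sketch (the function and array names below are invented for illustration and are not part of this commit): exporting a fixed-size u32 histogram through the helper above.

static u32 sample_histogram[32];	/* illustrative per-event counters */

static int __init sample_debugfs_setup(void)
{
	struct dentry *d_xen = xen_init_debugfs();

	if (d_xen == NULL)
		return -ENOMEM;

	/* readable later as <debugfs mount>/xen/sample_histogram */
	xen_debugfs_create_u32_array("sample_histogram", 0444, d_xen,
				     sample_histogram,
				     ARRAY_SIZE(sample_histogram));
	return 0;
}

The spinlock statistics added elsewhere in this merge use the helper in essentially this way.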
#ifndef _XEN_DEBUGFS_H
#define _XEN_DEBUGFS_H
struct dentry * __init xen_init_debugfs(void);
struct dentry *xen_debugfs_create_u32_array(const char *name, mode_t mode,
struct dentry *parent,
u32 *array, unsigned elements);
#endif /* _XEN_DEBUGFS_H */
...@@ -18,9 +18,6 @@ void xen_activate_mm(struct mm_struct *prev, struct mm_struct *next);
void xen_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm);
void xen_exit_mmap(struct mm_struct *mm);

-void xen_pgd_pin(pgd_t *pgd);
-//void xen_pgd_unpin(pgd_t *pgd);
-
pteval_t xen_pte_val(pte_t);
pmdval_t xen_pmd_val(pmd_t);
pgdval_t xen_pgd_val(pgd_t);
......
...@@ -30,8 +30,6 @@
#define TIMER_SLOP 100000
#define NS_PER_TICK (1000000000LL / HZ)

-static cycle_t xen_clocksource_read(void);
-
/* runstate info updated by Xen */
static DEFINE_PER_CPU(struct vcpu_runstate_info, runstate);
...@@ -213,7 +211,7 @@ unsigned long xen_tsc_khz(void)
	return xen_khz;
}

-static cycle_t xen_clocksource_read(void)
+cycle_t xen_clocksource_read(void)
{
	struct pvclock_vcpu_time_info *src;
	cycle_t ret;
...@@ -452,6 +450,14 @@ void xen_setup_timer(int cpu)
	setup_runstate_info(cpu);
}
void xen_teardown_timer(int cpu)
{
struct clock_event_device *evt;
BUG_ON(cpu == 0);
evt = &per_cpu(xen_clock_events, cpu);
unbind_from_irqhandler(evt->irq, NULL);
}
void xen_setup_cpu_clockevents(void)
{
	BUG_ON(preemptible());
......
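xen_teardown_timer() above is the counterpart of xen_setup_timer() for CPUs that go away. A rough sketch of how a CPU-offline path might pair the new teardown helpers (the function name and hotplug glue here are illustrative, not taken from this diff):

static void example_cpu_offline(unsigned int cpu)
{
	/* release per-cpu resources acquired when the CPU came online */
	xen_uninit_lock_cpu(cpu);	/* undo xen_init_lock_cpu() */
	xen_teardown_timer(cpu);	/* unbind the per-cpu timer interrupt */
}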
...@@ -298,7 +298,7 @@ check_events:
	push %eax
	push %ecx
	push %edx
-	call force_evtchn_callback
+	call xen_force_evtchn_callback
	pop %edx
	pop %ecx
	pop %eax
......
...@@ -2,6 +2,7 @@
#define XEN_OPS_H

#include <linux/init.h>
+#include <linux/clocksource.h>
#include <linux/irqreturn.h>
#include <xen/xen-ops.h>
...@@ -31,7 +32,10 @@ void xen_vcpu_restore(void);

void __init xen_build_dynamic_phys_to_machine(void);

+void xen_init_irq_ops(void);
void xen_setup_timer(int cpu);
+void xen_teardown_timer(int cpu);
+cycle_t xen_clocksource_read(void);
void xen_setup_cpu_clockevents(void);
unsigned long xen_tsc_khz(void);
void __init xen_time_init(void);
...@@ -50,6 +54,10 @@ void __init xen_setup_vcpu_info_placement(void);

#ifdef CONFIG_SMP
void xen_smp_init(void);

+void __init xen_init_spinlocks(void);
+__cpuinit void xen_init_lock_cpu(int cpu);
+void xen_uninit_lock_cpu(int cpu);
+
extern cpumask_t xen_cpu_initialized_map;
#else
static inline void xen_smp_init(void) {}
......
...@@ -1066,7 +1066,7 @@ static struct xenbus_driver blkfront = {
static int __init xlblk_init(void)
{
-	if (!is_running_on_xen())
+	if (!xen_domain())
		return -ENODEV;

	if (register_blkdev(XENVBD_MAJOR, DEV_NAME)) {
......
...@@ -335,11 +335,11 @@ static struct xenbus_driver xenkbd = {
static int __init xenkbd_init(void)
{
-	if (!is_running_on_xen())
+	if (!xen_domain())
		return -ENODEV;

	/* Nothing to do if running in dom0. */
-	if (is_initial_xendomain())
+	if (xen_initial_domain())
		return -ENODEV;

	return xenbus_register_frontend(&xenkbd);
......