Commit 414f827c authored by Linus Torvalds

Merge branch 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6

* 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6: (94 commits)
  [PATCH] x86-64: Remove mk_pte_phys()
  [PATCH] i386: Fix broken CONFIG_COMPAT_VDSO on i386
  [PATCH] i386: fix 32-bit ioctls on x64_32
  [PATCH] x86: Unify pcspeaker platform device code between i386/x86-64
  [PATCH] i386: Remove extern declaration from mm/discontig.c, put in header.
  [PATCH] i386: Rename cpu_gdt_descr and remove extern declaration from smpboot.c
  [PATCH] i386: Move mce_disabled to asm/mce.h
  [PATCH] i386: paravirt unhandled fallthrough
  [PATCH] x86_64: Wire up compat epoll_pwait
  [PATCH] x86: Don't require the vDSO for handling a.out signals
  [PATCH] i386: Fix Cyrix MediaGX detection
  [PATCH] i386: Fix warning in cpu initialization
  [PATCH] i386: Fix warning in microcode.c
  [PATCH] x86: Enable NMI watchdog for AMD Family 0x10 CPUs
  [PATCH] x86: Add new CPUID bits for AMD Family 10 CPUs in /proc/cpuinfo
  [PATCH] i386: Remove fastcall in paravirt.[ch]
  [PATCH] x86-64: Fix wrong gcc check in bitops.h
  [PATCH] x86-64: survive having no irq mapping for a vector
  [PATCH] i386: geode configuration fixes
  [PATCH] i386: add option to show more code in oops reports
  ...
parents 86a71dbd 126b1922
...@@ -104,6 +104,9 @@ loader, and have no meaning to the kernel directly.
Do not modify the syntax of boot loader parameters without extreme
need or coordination with <Documentation/i386/boot.txt>.

+There are also arch-specific kernel-parameters not documented here.
+See for example <Documentation/x86_64/boot-options.txt>.
+
Note that ALL kernel parameters listed below are CASE SENSITIVE, and that
a trailing = on the name of any parameter states that that parameter will
be entered as an environment variable, whereas its absence indicates that
...@@ -361,6 +364,11 @@ and is between 256 and 4096 characters. It is defined in the file
	clocksource is not available, it defaults to PIT.
	Format: { pit | tsc | cyclone | pmtmr }

+	code_bytes	[IA32] How many bytes of object code to print in an
+			oops report.
+			Range: 0 - 8192
+			Default: 64
+
	disable_8254_timer
	enable_8254_timer
			[IA32/X86_64] Disable/Enable interrupt 0 timer routing
......
...@@ -180,40 +180,81 @@ PCI
pci=lastbus=NUMBER	Scan up to NUMBER busses, no matter what the mptable says.
pci=noacpi		Don't use ACPI to set up PCI interrupt routing.

-IOMMU
-iommu=[size][,noagp][,off][,force][,noforce][,leak][,memaper[=order]][,merge]
-	[,forcesac][,fullflush][,nomerge][,noaperture][,calgary]
-	size		set size of iommu (in bytes)
-	noagp		don't initialize the AGP driver and use full aperture.
-	off		don't use the IOMMU
-	leak		turn on simple iommu leak tracing (only when CONFIG_IOMMU_LEAK is on)
-	memaper[=order]	allocate an own aperture over RAM with size 32MB^order.
-	noforce		don't force IOMMU usage. Default.
-	force		Force IOMMU.
-	merge		Do SG merging. Implies force (experimental)
-	nomerge		Don't do SG merging.
-	forcesac	For SAC mode for masks <40bits (experimental)
-	fullflush	Flush IOMMU on each allocation (default)
-	nofullflush	Don't use IOMMU fullflush
-	allowed		overwrite iommu off workarounds for specific chipsets.
-	soft		Use software bounce buffering (default for Intel machines)
-	noaperture	Don't touch the aperture for AGP.
-	allowdac	Allow DMA >4GB
-			When off all DMA over >4GB is forced through an IOMMU or bounce
-			buffering.
-	nodac		Forbid DMA >4GB
-	panic		Always panic when IOMMU overflows
-	calgary		Use the Calgary IOMMU if it is available
-swiotlb=pages[,force]
-	pages		Prereserve that many 128K pages for the software IO bounce buffering.
-	force		Force all IO through the software TLB.
-calgary=[64k,128k,256k,512k,1M,2M,4M,8M]
-calgary=[translate_empty_slots]
-calgary=[disable=<PCI bus number>]
+IOMMU (input/output memory management unit)
+
+ Currently four x86-64 PCI-DMA mapping implementations exist:
+
+   1. <arch/x86_64/kernel/pci-nommu.c>: use no hardware/software IOMMU at all
+      (e.g. because you have < 3 GB memory).
+      Kernel boot message: "PCI-DMA: Disabling IOMMU"
+
+   2. <arch/x86_64/kernel/pci-gart.c>: AMD GART based hardware IOMMU.
+      Kernel boot message: "PCI-DMA: using GART IOMMU"
+
+   3. <arch/x86_64/kernel/pci-swiotlb.c>: Software IOMMU implementation. Used
+      e.g. if there is no hardware IOMMU in the system and it is needed because
+      you have >3GB memory or told the kernel to use it (iommu=soft).
+      Kernel boot message: "PCI-DMA: Using software bounce buffering
+      for IO (SWIOTLB)"
+
+   4. <arch/x86_64/kernel/pci-calgary.c>: IBM Calgary hardware IOMMU. Used in
+      IBM pSeries and xSeries servers. This hardware IOMMU supports DMA address
+      mapping with memory protection, etc.
+      Kernel boot message: "PCI-DMA: Using Calgary IOMMU"
+
+ iommu=[<size>][,noagp][,off][,force][,noforce][,leak[=<nr_of_leak_pages>]
+	[,memaper[=<order>]][,merge][,forcesac][,fullflush][,nomerge]
+	[,noaperture][,calgary]
+
+ General iommu options:
+   off			Don't initialize and use any kind of IOMMU.
+   noforce		Don't force hardware IOMMU usage when it is not needed
+			(default).
+   force		Force the use of the hardware IOMMU even when it is
+			not actually needed (e.g. because < 3 GB memory).
+   soft			Use software bounce buffering (SWIOTLB) (default for
+			Intel machines). This can be used to prevent the usage
+			of an available hardware IOMMU.
+
+ iommu options only relevant to the AMD GART hardware IOMMU:
+   <size>		Set the size of the remapping area in bytes.
+   allowed		Overwrite iommu off workarounds for specific chipsets.
+   fullflush		Flush IOMMU on each allocation (default).
+   nofullflush		Don't use IOMMU fullflush.
+   leak			Turn on simple iommu leak tracing (only when
+			CONFIG_IOMMU_LEAK is on). Default number of leak pages
+			is 20.
+   memaper[=<order>]	Allocate an own aperture over RAM with size 32MB<<order.
+			(default: order=1, i.e. 64MB)
+   merge		Do scatter-gather (SG) merging. Implies "force"
+			(experimental).
+   nomerge		Don't do scatter-gather (SG) merging.
+   noaperture		Ask the IOMMU not to touch the aperture for AGP.
+   forcesac		Force single-address cycle (SAC) mode for masks <40bits
+			(experimental).
+   noagp		Don't initialize the AGP driver and use full aperture.
+   allowdac		Allow double-address cycle (DAC) mode, i.e. DMA >4GB.
+			DAC is used with 32-bit PCI to push a 64-bit address in
+			two cycles. When off all DMA over >4GB is forced through
+			an IOMMU or software bounce buffering.
+   nodac		Forbid DAC mode, i.e. DMA >4GB.
+   panic		Always panic when IOMMU overflows.
+   calgary		Use the Calgary IOMMU if it is available.
+
+ iommu options only relevant to the software bounce buffering (SWIOTLB) IOMMU
+ implementation:
+   swiotlb=<pages>[,force]
+   <pages>		Prereserve that many 128K pages for the software IO
+			bounce buffering.
+   force		Force all IO through the software TLB.
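
 As a quick illustration, a few example boot-line combinations of the
 options above (hypothetical values, not recommendations):

	iommu=off			# run without any IOMMU
	iommu=force,memaper=2		# force hardware IOMMU, own aperture of 32MB<<2 = 128MB
	iommu=soft swiotlb=64		# software bounce buffering, 64 preallocated 128K pages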
+ Settings for the IBM Calgary hardware IOMMU currently found in IBM
+ pSeries and xSeries machines:
+   calgary=[64k,128k,256k,512k,1M,2M,4M,8M]
+   calgary=[translate_empty_slots]
+   calgary=[disable=<PCI bus number>]
+   panic		Always panic when IOMMU overflows
+
   64k,...,8M - Set the size of each PCI slot's translation table
   when using the Calgary IOMMU. This is the size of the translation
...@@ -234,14 +275,14 @@ IOMMU

Debugging

oops=panic	Always panic on oopses. Default is to just kill the process,
		but there is a small probability of deadlocking the machine.
		This will also cause panics on machine check exceptions.
		Useful together with panic=30 to trigger a reboot.

-kstack=N	Print that many words from the kernel stack in oops dumps.
+kstack=N	Print N words from the kernel stack in oops dumps.

pagefaulttrace	Dump all page faults. Only useful for extreme debugging
		and will create a lot of output.

call_trace=[old|both|newfallback|new]
...@@ -251,15 +292,8 @@ Debugging
		newfallback: use new unwinder but fall back to old if it gets
			stuck (default)

-call_trace=[old|both|newfallback|new]
-	old: use old inexact backtracer
-	new: use new exact dwarf2 unwinder
-	both: print entries from both
-	newfallback: use new unwinder but fall back to old if it gets
-		stuck (default)
-
-Misc
+Miscellaneous

noreplacement	Don't replace instructions with more appropriate ones
		for the CPU. This may be useful on asymmetric MP systems
-		where some CPU have less capabilities than the others.
+		where some CPUs have less capabilities than others.
...@@ -2,7 +2,7 @@ Firmware support for CPU hotplug under Linux/x86-64
---------------------------------------------------

Linux/x86-64 supports CPU hotplug now. For various reasons Linux wants to
-know in advance boot time the maximum number of CPUs that could be plugged
+know in advance of boot time the maximum number of CPUs that could be plugged
into the system. ACPI 3.0 currently has no official way to supply
this information from the firmware to the operating system.
......
...@@ -9,9 +9,9 @@ zombie. While the thread is in user space the kernel stack is empty
except for the thread_info structure at the bottom.

In addition to the per thread stacks, there are specialized stacks
-associated with each cpu.  These stacks are only used while the kernel
-is in control on that cpu, when a cpu returns to user space the
-specialized stacks contain no useful data.  The main cpu stacks is
+associated with each CPU.  These stacks are only used while the kernel
+is in control on that CPU; when a CPU returns to user space the
+specialized stacks contain no useful data.  The main CPU stacks are:

* Interrupt stack.  IRQSTACKSIZE
...@@ -32,17 +32,17 @@ x86_64 also has a feature which is not available on i386, the ability
to automatically switch to a new stack for designated events such as
double fault or NMI, which makes it easier to handle these unusual
events on x86_64.  This feature is called the Interrupt Stack Table
-(IST).  There can be up to 7 IST entries per cpu. The IST code is an
-index into the Task State Segment (TSS), the IST entries in the TSS
-point to dedicated stacks, each stack can be a different size.
+(IST).  There can be up to 7 IST entries per CPU.  The IST code is an
+index into the Task State Segment (TSS).  The IST entries in the TSS
+point to dedicated stacks; each stack can be a different size.

-An IST is selected by an non-zero value in the IST field of an
+An IST is selected by a non-zero value in the IST field of an
interrupt-gate descriptor.  When an interrupt occurs and the hardware
loads such a descriptor, the hardware automatically sets the new stack
pointer based on the IST value, then invokes the interrupt handler.  If
software wants to allow nested IST interrupts then the handler must
adjust the IST values on entry to and exit from the interrupt handler.
-(this is occasionally done, e.g. for debug exceptions)
+(This is occasionally done, e.g. for debug exceptions.)
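
As a rough illustration of where that IST field lives, here is a sketch
of the 64-bit interrupt-gate descriptor layout (illustrative C, not
kernel source; the 3-bit IST index selects one of the up to 7 stacks):

	/* 16-byte x86-64 IDT gate */
	struct idt_gate {
		unsigned short offset_low;
		unsigned short selector;	/* code segment selector */
		unsigned char  ist;		/* bits 0-2: IST index, 0 = no stack switch */
		unsigned char  type_attr;	/* gate type, DPL, present bit */
		unsigned short offset_mid;
		unsigned int   offset_high;
		unsigned int   reserved;
	};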
Events with different IST codes (i.e. with different stacks) can be
nested. For example, a debug interrupt can safely be interrupted by an
...@@ -58,17 +58,17 @@ The currently assigned IST stacks are :-
  Used for interrupt 12 - Stack Fault Exception (#SS).
-  This allows to recover from invalid stack segments. Rarely
+  This allows the CPU to recover from invalid stack segments. Rarely
  happens.

* DOUBLEFAULT_STACK.  EXCEPTION_STKSZ (PAGE_SIZE).
  Used for interrupt 8 - Double Fault Exception (#DF).
-  Invoked when handling a exception causes another exception. Happens
-  when the kernel is very confused (e.g. kernel stack pointer corrupt)
-  Using a separate stack allows to recover from it well enough in many
-  cases to still output an oops.
+  Invoked when handling one exception causes another exception. Happens
+  when the kernel is very confused (e.g. kernel stack pointer corrupt).
+  Using a separate stack allows the kernel to recover from it well enough
+  in many cases to still output an oops.

* NMI_STACK.  EXCEPTION_STKSZ (PAGE_SIZE).
......
Configurable sysfs parameters for the x86-64 machine check code.
Machine checks report internal hardware error conditions detected
by the CPU. Uncorrected errors typically cause a machine check
(often with panic), corrected ones cause a machine check log entry.
Machine checks are organized in banks (normally associated with
a hardware subsystem) and subevents in a bank. The exact meaning
of the banks and subevents is CPU specific.
mcelog knows how to decode them.
When you see the "Machine check errors logged" message in the system
log then mcelog should run to collect and decode machine check entries
from /dev/mcelog. Normally mcelog should be run regularly from a cronjob.
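
An example crontab entry for that (the mcelog path and log file are
illustrative assumptions, not part of this document):

	*/5 * * * *	/usr/sbin/mcelog >> /var/log/mcelog 2>&1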
Each CPU has a directory in /sys/devices/system/machinecheck/machinecheckN
(N = CPU number)
The directory contains some configurable entries:
bankNctl
(N = bank number)
	64-bit hex bitmask enabling/disabling specific subevents for bank N.
	When a bit in the bitmask is zero then the respective
	subevent will not be reported.
	By default all events are enabled.
	Note that the BIOS maintains another mask to disable specific events
	per bank. This is not visible here.
The following entries appear for each CPU, but they are truly shared
between all CPUs.
check_interval
How often to poll for corrected machine check errors, in seconds
	(Note output is hexadecimal). Default 5 minutes.
tolerant
	Tolerance level. When a machine check exception occurs for a
	non-corrected machine check the kernel can take different actions.
	Since machine check exceptions can happen any time, it is sometimes
	risky for the kernel to kill a process because it defies
	normal kernel locking rules. The tolerance level configures
	how hard the kernel tries to recover even at some risk of deadlock.
0: always panic,
1: panic if deadlock possible,
2: try to avoid panic,
3: never panic or exit (for testing only)
Default: 1
Note this only makes a difference if the CPU allows recovery
from a machine check exception. Current x86 CPUs generally do not.
trigger
Program to run when a machine check event is detected.
This is an alternative to running mcelog regularly from cron
	and allows events to be detected faster.
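
For illustration, a minimal user-space sketch that reads the entries
described above for CPU 0 (it assumes only the sysfs layout documented
here; error handling is kept to a minimum):

	#include <stdio.h>

	static void show(const char *name)
	{
		char path[128], buf[64];
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/machinecheck/machinecheck0/%s", name);
		f = fopen(path, "r");
		if (f && fgets(buf, sizeof(buf), f))
			printf("%s = %s", name, buf);
		if (f)
			fclose(f);
	}

	int main(void)
	{
		show("check_interval");	/* poll interval, output is hex */
		show("tolerant");	/* recovery aggressiveness, 0-3 */
		show("bank0ctl");	/* 64-bit subevent mask for bank 0 */
		return 0;
	}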
TBD document entries for AMD threshold interrupt configuration
For more details about the x86 machine check architecture
see the Intel and AMD architecture manuals from their developer websites.
For more details about the architecture see
http://one.firstfloor.org/~andi/mce.pdf
...@@ -3,26 +3,26 @@

Virtual memory map with 4 level page tables:

-0000000000000000 - 00007fffffffffff (=47bits) user space, different per mm
+0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
hole caused by [48:63] sign extension
-ffff800000000000 - ffff80ffffffffff (=40bits) guard hole
-ffff810000000000 - ffffc0ffffffffff (=46bits) direct mapping of all phys. memory
-ffffc10000000000 - ffffc1ffffffffff (=40bits) hole
-ffffc20000000000 - ffffe1ffffffffff (=45bits) vmalloc/ioremap space
+ffff800000000000 - ffff80ffffffffff (=40 bits) guard hole
+ffff810000000000 - ffffc0ffffffffff (=46 bits) direct mapping of all phys. memory
+ffffc10000000000 - ffffc1ffffffffff (=40 bits) hole
+ffffc20000000000 - ffffe1ffffffffff (=45 bits) vmalloc/ioremap space
... unused hole ...
-ffffffff80000000 - ffffffff82800000 (=40MB) kernel text mapping, from phys 0
+ffffffff80000000 - ffffffff82800000 (=40 MB) kernel text mapping, from phys 0
... unused hole ...
-ffffffff88000000 - fffffffffff00000 (=1919MB) module mapping space
+ffffffff88000000 - fffffffffff00000 (=1919 MB) module mapping space

-The direct mapping covers all memory in the system upto the highest
+The direct mapping covers all memory in the system up to the highest
memory address (this means in some cases it can also include PCI memory
-holes)
+holes).

vmalloc space is lazily synchronized into the different PML4 pages of
the processes using the page fault handler, with init_level4_pgt as
reference.

-Current X86-64 implementations only support 40 bit of address space,
-but we support upto 46bits. This expands into MBZ space in the page tables.
+Current X86-64 implementations only support 40 bits of address space,
+but we support up to 46 bits. This expands into MBZ space in the page tables.
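
A small sketch of the sign-extension rule behind that hole (illustrative
user-space C; assumes the usual arithmetic right shift on signed types):

	#include <stdint.h>
	#include <stdio.h>

	/* An address is canonical when bits 48..63 all equal bit 47. */
	static int is_canonical(uint64_t addr)
	{
		return (uint64_t)(((int64_t)(addr << 16)) >> 16) == addr;
	}

	int main(void)
	{
		printf("%d\n", is_canonical(0x00007fffffffffffULL)); /* 1: top of user space */
		printf("%d\n", is_canonical(0xffff800000000000ULL)); /* 1: start of guard hole */
		printf("%d\n", is_canonical(0x0000800000000000ULL)); /* 0: inside the hole */
		return 0;
	}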
-Andi Kleen, Jul 2004
...@@ -3779,6 +3779,7 @@ P: Andi Kleen
M:	ak@suse.de
L:	discuss@x86-64.org
W:	http://www.x86-64.org
+T:	quilt ftp://ftp.firstfloor.org/pub/ak/x86_64/quilt-current
S:	Maintained

YAM DRIVER FOR AX.25
......
...@@ -203,6 +203,15 @@ config PARAVIRT
	  However, when run without a hypervisor the kernel is
	  theoretically slower. If in doubt, say N.

+config VMI
+	bool "VMI Paravirt-ops support"
+	depends on PARAVIRT
+	default y
+	help
+	  VMI provides a paravirtualized interface to multiple hypervisors,
+	  including VMware ESX server and Xen, by connecting to a ROM module
+	  provided by the hypervisor.
+
config ACPI_SRAT
	bool
	default y
...@@ -1263,3 +1272,12 @@ config X86_TRAMPOLINE
config KTIME_SCALAR
	bool
	default y
+
+config NO_IDLE_HZ
+	bool
+	depends on PARAVIRT
+	default y
+	help
+	  Switches the regular HZ timer off when the system is going idle.
+	  This helps a hypervisor detect that the Linux system is idle,
+	  reducing the overhead of idle systems.
...@@ -226,11 +226,6 @@ config X86_CMPXCHG
	depends on !M386
	default y

-config X86_XADD
-	bool
-	depends on !M386
-	default y
-
config X86_L1_CACHE_SHIFT
	int
	default "7" if MPENTIUM4 || X86_GENERIC
......
...@@ -87,7 +87,7 @@ config DOUBLEFAULT
config DEBUG_PARAVIRT
	bool "Enable some paravirtualization debugging"
-	default y
+	default n
	depends on PARAVIRT && DEBUG_KERNEL
	help
	  Currently deliberately clobbers regs which are allowed to be
......
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.20-rc3
-# Fri Jan 5 11:54:46 2007
+# Linux kernel version: 2.6.20-git8
+# Tue Feb 13 11:25:18 2007
#
CONFIG_X86_32=y
CONFIG_GENERIC_TIME=y
...@@ -10,6 +10,7 @@ CONFIG_STACKTRACE_SUPPORT=y
CONFIG_SEMAPHORE_SLEEPERS=y
CONFIG_X86=y
CONFIG_MMU=y
+CONFIG_ZONE_DMA=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_BUG=y
...@@ -139,7 +140,6 @@ CONFIG_MPENTIUMIII=y
# CONFIG_MVIAC3_2 is not set
CONFIG_X86_GENERIC=y
CONFIG_X86_CMPXCHG=y
-CONFIG_X86_XADD=y
CONFIG_X86_L1_CACHE_SHIFT=7
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
# CONFIG_ARCH_HAS_ILOG2_U32 is not set
...@@ -198,6 +198,7 @@ CONFIG_FLAT_NODE_MEM_MAP=y
# CONFIG_SPARSEMEM_STATIC is not set
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_RESOURCES_64BIT=y
+CONFIG_ZONE_DMA_FLAG=1
# CONFIG_HIGHPTE is not set
# CONFIG_MATH_EMULATION is not set
CONFIG_MTRR=y
...@@ -211,6 +212,7 @@ CONFIG_HZ_250=y
CONFIG_HZ=250
# CONFIG_KEXEC is not set
# CONFIG_CRASH_DUMP is not set
+CONFIG_PHYSICAL_START=0x100000
# CONFIG_RELOCATABLE is not set
CONFIG_PHYSICAL_ALIGN=0x100000
# CONFIG_HOTPLUG_CPU is not set
...@@ -229,13 +231,14 @@ CONFIG_PM_SYSFS_DEPRECATED=y
# ACPI (Advanced Configuration and Power Interface) Support
#
CONFIG_ACPI=y
+CONFIG_ACPI_PROCFS=y
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
-# CONFIG_ACPI_VIDEO is not set
# CONFIG_ACPI_HOTKEY is not set
CONFIG_ACPI_FAN=y
# CONFIG_ACPI_DOCK is not set
+# CONFIG_ACPI_BAY is not set
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_THERMAL=y
# CONFIG_ACPI_ASUS is not set
...@@ -306,7 +309,6 @@ CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
# CONFIG_PCIEPORTBUS is not set
CONFIG_PCI_MSI=y
-# CONFIG_PCI_MULTITHREAD_PROBE is not set
# CONFIG_PCI_DEBUG is not set
# CONFIG_HT_IRQ is not set
CONFIG_ISA_DMA_API=y
...@@ -347,6 +349,7 @@ CONFIG_UNIX=y
CONFIG_XFRM=y
# CONFIG_XFRM_USER is not set
# CONFIG_XFRM_SUB_POLICY is not set
+# CONFIG_XFRM_MIGRATE is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
...@@ -446,6 +449,7 @@ CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
# CONFIG_DEBUG_DRIVER is not set
+# CONFIG_DEBUG_DEVRES is not set
# CONFIG_SYS_HYPERVISOR is not set

#
...@@ -466,8 +470,7 @@ CONFIG_FW_LOADER=y
#
# Plug and Play support
#
-CONFIG_PNP=y
-CONFIG_PNPACPI=y
+# CONFIG_PNP is not set

#
# Block devices
...@@ -515,6 +518,7 @@ CONFIG_BLK_DEV_IDECD=y
# CONFIG_BLK_DEV_IDETAPE is not set
# CONFIG_BLK_DEV_IDEFLOPPY is not set
# CONFIG_BLK_DEV_IDESCSI is not set
+CONFIG_BLK_DEV_IDEACPI=y
# CONFIG_IDE_TASK_IOCTL is not set

#
...@@ -547,6 +551,7 @@ CONFIG_BLK_DEV_AMD74XX=y
# CONFIG_BLK_DEV_JMICRON is not set
# CONFIG_BLK_DEV_SC1200 is not set
CONFIG_BLK_DEV_PIIX=y
+# CONFIG_BLK_DEV_IT8213 is not set
# CONFIG_BLK_DEV_IT821X is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_PDC202XX_OLD is not set
...@@ -557,6 +562,7 @@ CONFIG_BLK_DEV_PIIX=y
# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
+# CONFIG_BLK_DEV_TC86C001 is not set
# CONFIG_IDE_ARM is not set
CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_IDEDMA_IVB is not set
...@@ -655,6 +661,7 @@ CONFIG_AIC79XX_DEBUG_MASK=0
# Serial ATA (prod) and Parallel ATA (experimental) drivers
#
CONFIG_ATA=y
+# CONFIG_ATA_NONSTANDARD is not set
CONFIG_SATA_AHCI=y
CONFIG_SATA_SVW=y
CONFIG_ATA_PIIX=y
...@@ -670,6 +677,7 @@ CONFIG_SATA_SIL=y
# CONFIG_SATA_ULI is not set
CONFIG_SATA_VIA=y
# CONFIG_SATA_VITESSE is not set
+# CONFIG_SATA_INIC162X is not set
CONFIG_SATA_INTEL_COMBINED=y
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
...@@ -687,6 +695,7 @@ CONFIG_SATA_INTEL_COMBINED=y
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT821X is not set
+# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_MARVELL is not set
...@@ -739,9 +748,7 @@ CONFIG_IEEE1394=y
# Subsystem Options
#
# CONFIG_IEEE1394_VERBOSEDEBUG is not set
-# CONFIG_IEEE1394_OUI_DB is not set
# CONFIG_IEEE1394_EXTRA_CONFIG_ROMS is not set
-# CONFIG_IEEE1394_EXPORT_FULL_API is not set

#
# Device Drivers
...@@ -766,6 +773,11 @@ CONFIG_IEEE1394_RAWIO=y
#
# CONFIG_I2O is not set

+#
+# Macintosh device drivers
+#
+# CONFIG_MAC_EMUMOUSEBTN is not set
+
#
# Network device support
#
...@@ -833,6 +845,7 @@ CONFIG_8139TOO=y
# CONFIG_SUNDANCE is not set
# CONFIG_TLAN is not set
# CONFIG_VIA_RHINE is not set
+# CONFIG_SC92031 is not set

#
# Ethernet (1000 Mbit)
...@@ -855,11 +868,13 @@ CONFIG_SKY2=y
CONFIG_TIGON3=y
CONFIG_BNX2=y
# CONFIG_QLA3XXX is not set
+# CONFIG_ATL1 is not set

#
# Ethernet (10000 Mbit)
#
# CONFIG_CHELSIO_T1 is not set
+# CONFIG_CHELSIO_T3 is not set
# CONFIG_IXGB is not set
# CONFIG_S2IO is not set
# CONFIG_MYRI10GE is not set
...@@ -1090,6 +1105,7 @@ CONFIG_SOUND=y
# Open Sound System
#
CONFIG_SOUND_PRIME=y
+CONFIG_OBSOLETE_OSS=y
# CONFIG_SOUND_BT878 is not set
# CONFIG_SOUND_ES1371 is not set
CONFIG_SOUND_ICH=y
...@@ -1103,6 +1119,7 @@ CONFIG_SOUND_ICH=y
# HID Devices
#
CONFIG_HID=y
+# CONFIG_HID_DEBUG is not set

#
# USB support
...@@ -1117,10 +1134,8 @@ CONFIG_USB=y
# Miscellaneous USB options
#
CONFIG_USB_DEVICEFS=y
-# CONFIG_USB_BANDWIDTH is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_SUSPEND is not set
-# CONFIG_USB_MULTITHREAD_PROBE is not set
# CONFIG_USB_OTG is not set

#
...@@ -1130,9 +1145,11 @@ CONFIG_USB_EHCI_HCD=y
# CONFIG_USB_EHCI_SPLIT_ISO is not set
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
+# CONFIG_USB_EHCI_BIG_ENDIAN_MMIO is not set
# CONFIG_USB_ISP116X_HCD is not set
CONFIG_USB_OHCI_HCD=y
-# CONFIG_USB_OHCI_BIG_ENDIAN is not set
+# CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set
+# CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
...@@ -1183,6 +1200,7 @@ CONFIG_USB_HID=y
# CONFIG_USB_ATI_REMOTE2 is not set
# CONFIG_USB_KEYSPAN_REMOTE is not set
# CONFIG_USB_APPLETOUCH is not set
+# CONFIG_USB_GTCO is not set

#
# USB Imaging devices
...@@ -1287,6 +1305,10 @@ CONFIG_USB_MON=y
# DMA Devices
#

+#
+# Auxiliary Display support
+#
+
#
# Virtualization
#
...@@ -1480,6 +1502,7 @@ CONFIG_UNUSED_SYMBOLS=y
# CONFIG_DEBUG_FS is not set
# CONFIG_HEADERS_CHECK is not set
CONFIG_DEBUG_KERNEL=y
+# CONFIG_DEBUG_SHIRQ is not set
CONFIG_LOG_BUF_SHIFT=18
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
...@@ -1488,7 +1511,6 @@ CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
-# CONFIG_DEBUG_RWSEMS is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_PROVE_LOCKING is not set
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
...@@ -1533,7 +1555,8 @@ CONFIG_CRC32=y
# CONFIG_LIBCRC32C is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_PLIST=y
-CONFIG_IOMAP_COPY=y
+CONFIG_HAS_IOMEM=y
+CONFIG_HAS_IOPORT=y
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_PENDING_IRQ=y
......
...@@ -40,8 +40,9 @@ obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
obj-$(CONFIG_HPET_TIMER)	+= hpet.o
obj-$(CONFIG_K8_NB)		+= k8.o

-# Make sure this is linked after any other paravirt_ops structs: see head.S
+obj-$(CONFIG_VMI)		+= vmi.o vmitime.o
obj-$(CONFIG_PARAVIRT)		+= paravirt.o
+obj-y				+= pcspeaker.o

EXTRA_AFLAGS   := -traditional
......
...@@ -36,6 +36,7 @@
#include <asm/hpet.h>
#include <asm/i8253.h>
#include <asm/nmi.h>
+#include <asm/idle.h>

#include <mach_apic.h>
#include <mach_apicdef.h>
...@@ -1255,6 +1256,7 @@ fastcall void smp_apic_timer_interrupt(struct pt_regs *regs)
	 * Besides, if we don't timer interrupts ignore the global
	 * interrupt lock, which is the WrongThing (tm) to do.
	 */
+	exit_idle();
	irq_enter();
	smp_local_timer_interrupt();
	irq_exit();
...@@ -1305,6 +1307,7 @@ fastcall void smp_spurious_interrupt(struct pt_regs *regs)
{
	unsigned long v;

+	exit_idle();
	irq_enter();
	/*
	 * Check if this really is a spurious interrupt and ACK it
...@@ -1329,6 +1332,7 @@ fastcall void smp_error_interrupt(struct pt_regs *regs)
{
	unsigned long v, v1;

+	exit_idle();
	irq_enter();
	/* First tickle the hardware, only then report what went on. -- REW */
	v = apic_read(APIC_ESR);
...@@ -1395,7 +1399,7 @@ int __init APIC_init_uniprocessor (void)
	if (!skip_ioapic_setup && nr_ioapics)
		setup_IO_APIC();
#endif
-	setup_boot_APIC_clock();
+	setup_boot_clock();

	return 0;
}
......
...@@ -211,6 +211,7 @@
#include <linux/slab.h>
#include <linux/stat.h>
#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
#include <linux/miscdevice.h>
#include <linux/apm_bios.h>
#include <linux/init.h>
...@@ -1636,9 +1637,8 @@ static int do_open(struct inode * inode, struct file * filp)
	return 0;
}

-static int apm_get_info(char *buf, char **start, off_t fpos, int length)
+static int proc_apm_show(struct seq_file *m, void *v)
{
-	char		*p;
	unsigned short	bx;
	unsigned short	cx;
	unsigned short	dx;
...@@ -1650,8 +1650,6 @@ static int apm_get_info(char *buf, char **start, off_t fpos, int length)
	int		time_units	= -1;
	char		*units		= "?";

-	p = buf;
-
	if ((num_online_cpus() == 1) &&
	    !(error = apm_get_power_status(&bx, &cx, &dx))) {
		ac_line_status = (bx >> 8) & 0xff;
...@@ -1705,7 +1703,7 @@ static int apm_get_info(char *buf, char **start, off_t fpos, int length)
	      -1: Unknown
	   8) min = minutes; sec = seconds */

-	p += sprintf(p, "%s %d.%d 0x%02x 0x%02x 0x%02x 0x%02x %d%% %d %s\n",
+	seq_printf(m, "%s %d.%d 0x%02x 0x%02x 0x%02x 0x%02x %d%% %d %s\n",
		     driver_version,
		     (apm_info.bios.version >> 8) & 0xff,
		     apm_info.bios.version & 0xff,
...@@ -1716,10 +1714,22 @@ static int apm_get_info(char *buf, char **start, off_t fpos, int length)
		     percentage,
		     time_units,
		     units);
+	return 0;
+}

-	return p - buf;
+static int proc_apm_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, proc_apm_show, NULL);
}

+static const struct file_operations apm_file_ops = {
+	.owner		= THIS_MODULE,
+	.open		= proc_apm_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
static int apm(void *unused)
{
	unsigned short	bx;
...@@ -2341,9 +2351,9 @@ static int __init apm_init(void)
		set_base(gdt[APM_DS >> 3],
			 __va((unsigned long)apm_info.bios.dseg << 4));

-	apm_proc = create_proc_info_entry("apm", 0, NULL, apm_get_info);
+	apm_proc = create_proc_entry("apm", 0, NULL);
	if (apm_proc)
-		apm_proc->owner = THIS_MODULE;
+		apm_proc->proc_fops = &apm_file_ops;

	kapmd_task = kthread_create(apm, NULL, "kapmd");
	if (IS_ERR(kapmd_task)) {
......
...@@ -72,7 +72,7 @@ void foo(void)
	OFFSET(PT_EAX, pt_regs, eax);
	OFFSET(PT_DS,  pt_regs, xds);
	OFFSET(PT_ES,  pt_regs, xes);
-	OFFSET(PT_GS,  pt_regs, xgs);
+	OFFSET(PT_FS,  pt_regs, xfs);
	OFFSET(PT_ORIG_EAX, pt_regs, orig_eax);
	OFFSET(PT_EIP, pt_regs, eip);
	OFFSET(PT_CS,  pt_regs, xcs);
......
...@@ -605,7 +605,7 @@ void __init early_cpu_init(void)
struct pt_regs * __devinit idle_regs(struct pt_regs *regs)
{
	memset(regs, 0, sizeof(struct pt_regs));
-	regs->xgs = __KERNEL_PDA;
+	regs->xfs = __KERNEL_PDA;
	return regs;
}
...@@ -662,12 +662,12 @@ struct i386_pda boot_pda = {
	.pcurrent = &init_task,
};

-static inline void set_kernel_gs(void)
+static inline void set_kernel_fs(void)
{
-	/* Set %gs for this CPU's PDA.  Memory clobber is to create a
+	/* Set %fs for this CPU's PDA.  Memory clobber is to create a
	   barrier with respect to any PDA operations, so the compiler
	   doesn't move any before here. */
-	asm volatile ("mov %0, %%gs" : : "r" (__KERNEL_PDA) : "memory");
+	asm volatile ("mov %0, %%fs" : : "r" (__KERNEL_PDA) : "memory");
}

/* Initialize the CPU's GDT and PDA.  The boot CPU does this for
...@@ -718,7 +718,7 @@ void __cpuinit cpu_set_gdt(int cpu)
	   the boot CPU, this will transition from the boot gdt+pda to
	   the real ones). */
	load_gdt(cpu_gdt_descr);
-	set_kernel_gs();
+	set_kernel_fs();
}

/* Common CPU init for both boot and secondary CPUs */
...@@ -764,8 +764,8 @@ static void __cpuinit _cpu_init(int cpu, struct task_struct *curr)
	__set_tss_desc(cpu, GDT_ENTRY_DOUBLEFAULT_TSS, &doublefault_tss);
#endif

-	/* Clear %fs. */
-	asm volatile ("mov %0, %%fs" : : "r" (0));
+	/* Clear %gs. */
+	asm volatile ("mov %0, %%gs" : : "r" (0));

	/* Clear all 6 debug registers: */
	set_debugreg(0, 0);
......
...@@ -6,6 +6,7 @@
#include <asm/io.h>
#include <asm/processor.h>
#include <asm/timer.h>
+#include <asm/pci-direct.h>

#include "cpu.h"
...@@ -161,19 +162,19 @@ static void __cpuinit set_cx86_inc(void)
static void __cpuinit geode_configure(void)
{
	unsigned long flags;
-	u8 ccr3, ccr4;
+	u8 ccr3;

	local_irq_save(flags);

	/* Suspend on halt power saving and enable #SUSP pin */
	setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x88);

	ccr3 = getCx86(CX86_CCR3);
-	setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10);	/* Enable */
-	ccr4 = getCx86(CX86_CCR4);
-	ccr4 |= 0x38; /* FPU fast, DTE cache, Mem bypass */
-	setCx86(CX86_CCR3, ccr3);
+	setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10);	/* enable MAPEN */
+
+	/* FPU fast, DTE cache, Mem bypass */
+	setCx86(CX86_CCR4, getCx86(CX86_CCR4) | 0x38);
+	setCx86(CX86_CCR3, ccr3);			/* disable MAPEN */

	set_cx86_memwb();
	set_cx86_reorder();
...@@ -183,14 +184,6 @@ static void __cpuinit geode_configure(void)
}

-#ifdef CONFIG_PCI
-static struct pci_device_id __cpuinitdata cyrix_55x0[] = {
-	{ PCI_DEVICE(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_5510) },
-	{ PCI_DEVICE(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_5520) },
-	{ },
-};
-#endif
-
static void __cpuinit init_cyrix(struct cpuinfo_x86 *c)
{
	unsigned char dir0, dir0_msn, dir0_lsn, dir1 = 0;
...@@ -258,6 +251,8 @@ static void __cpuinit init_cyrix(struct cpuinfo_x86 *c)
	case 4: /* MediaGX/GXm or Geode GXM/GXLV/GX1 */
#ifdef CONFIG_PCI
+	{
+		u32 vendor, device;
		/* It isn't really a PCI quirk directly, but the cure is the
		   same. The MediaGX has deep magic SMM stuff that handles the
		   SB emulation. It thows away the fifo on disable_dma() which
...@@ -273,22 +268,34 @@ static void __cpuinit init_cyrix(struct cpuinfo_x86 *c)
		printk(KERN_INFO "Working around Cyrix MediaGX virtual DMA bugs.\n");
		isa_dma_bridge_buggy = 2;

+		/* We do this before the PCI layer is running. However we
+		   are safe here as we know the bridge must be a Cyrix
+		   companion and must be present */
+		vendor = read_pci_config_16(0, 0, 0x12, PCI_VENDOR_ID);
+		device = read_pci_config_16(0, 0, 0x12, PCI_DEVICE_ID);
+
		/*
		 * The 5510/5520 companion chips have a funky PIT.
		 */
-		if (pci_dev_present(cyrix_55x0))
+		if (vendor == PCI_VENDOR_ID_CYRIX &&
+		    (device == PCI_DEVICE_ID_CYRIX_5510 || device == PCI_DEVICE_ID_CYRIX_5520))
			pit_latch_buggy = 1;
+	}
#endif
		c->x86_cache_size=16;	/* Yep 16K integrated cache thats it */

		/* GXm supports extended cpuid levels 'ala' AMD */
		if (c->cpuid_level == 2) {
			/* Enable cxMMX extensions (GX1 Datasheet 54) */
-			setCx86(CX86_CCR7, getCx86(CX86_CCR7)|1);
+			setCx86(CX86_CCR7, getCx86(CX86_CCR7) | 1);

-			/* GXlv/GXm/GX1 */
-			if((dir1 >= 0x50 && dir1 <= 0x54) || dir1 >= 0x63)
+			/*
+			 * GXm : 0x30 ... 0x5f GXm  datasheet 51
+			 * GXlv: 0x6x          GXlv datasheet 54
+			 * ?   : 0x7x
+			 * GX1 : 0x8x          GX1  datasheet 56
+			 */
+			if((0x30 <= dir1 && dir1 <= 0x6f) || (0x80 <=dir1 && dir1 <= 0x8f))
				geode_configure();

			get_model_name(c);  /* get CPU marketing name */
			return;
...@@ -415,15 +422,14 @@ static void __cpuinit cyrix_identify(struct cpuinfo_x86 * c)
	if (dir0 == 5 || dir0 == 3)
	{
-		unsigned char ccr3, ccr4;
+		unsigned char ccr3;
		unsigned long flags;

		printk(KERN_INFO "Enabling CPUID on Cyrix processor.\n");
		local_irq_save(flags);

		ccr3 = getCx86(CX86_CCR3);
		setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* enable MAPEN */
-		ccr4 = getCx86(CX86_CCR4);
-		setCx86(CX86_CCR4, ccr4 | 0x80); /* enable cpuid */
-		setCx86(CX86_CCR3, ccr3); /* disable MAPEN */
+		setCx86(CX86_CCR4, getCx86(CX86_CCR4) | 0x80); /* enable cpuid */
+		setCx86(CX86_CCR3, ccr3); /* disable MAPEN */
		local_irq_restore(flags);
	}
}
......
...@@ -12,6 +12,7 @@
#include <asm/processor.h>
#include <asm/system.h>
+#include <asm/mce.h>

#include "mce.h"
......
#include <linux/init.h>
+#include <asm/mce.h>

void amd_mcheck_init(struct cpuinfo_x86 *c);
void intel_p4_mcheck_init(struct cpuinfo_x86 *c);
...@@ -9,6 +10,5 @@ void winchip_mcheck_init(struct cpuinfo_x86 *c);
/* Call the installed machine check handler for this CPU setup. */
extern fastcall void (*machine_check_vector)(struct pt_regs *, long error_code);

-extern int mce_disabled;
extern int nr_mce_banks;
...@@ -12,6 +12,7 @@
#include <asm/system.h>
#include <asm/msr.h>
#include <asm/apic.h>
+#include <asm/idle.h>
#include <asm/therm_throt.h>
...@@ -59,6 +60,7 @@ static void (*vendor_thermal_interrupt)(struct pt_regs *regs) = unexpected_therm
fastcall void smp_thermal_interrupt(struct pt_regs *regs)
{
+	exit_idle();
	irq_enter();
	vendor_thermal_interrupt(regs);
	irq_exit();
......
...@@ -211,6 +211,9 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
	default:
		return -ENOTTY;
	case MTRRIOC_ADD_ENTRY:
+#ifdef CONFIG_COMPAT
+	case MTRRIOC32_ADD_ENTRY:
+#endif
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		err =
...@@ -218,21 +221,33 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
			    file, 0);
		break;
	case MTRRIOC_SET_ENTRY:
+#ifdef CONFIG_COMPAT
+	case MTRRIOC32_SET_ENTRY:
+#endif
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		err = mtrr_add(sentry.base, sentry.size, sentry.type, 0);
		break;
	case MTRRIOC_DEL_ENTRY:
+#ifdef CONFIG_COMPAT
+	case MTRRIOC32_DEL_ENTRY:
+#endif
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		err = mtrr_file_del(sentry.base, sentry.size, file, 0);
		break;
	case MTRRIOC_KILL_ENTRY:
+#ifdef CONFIG_COMPAT
+	case MTRRIOC32_KILL_ENTRY:
+#endif
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		err = mtrr_del(-1, sentry.base, sentry.size);
		break;
	case MTRRIOC_GET_ENTRY:
+#ifdef CONFIG_COMPAT
+	case MTRRIOC32_GET_ENTRY:
+#endif
		if (gentry.regnum >= num_var_ranges)
			return -EINVAL;
		mtrr_if->get(gentry.regnum, &gentry.base, &size, &type);
...@@ -249,6 +264,9 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
		break;
	case MTRRIOC_ADD_PAGE_ENTRY:
+#ifdef CONFIG_COMPAT
+	case MTRRIOC32_ADD_PAGE_ENTRY:
+#endif
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		err =
...@@ -256,21 +274,33 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
			    file, 1);
		break;
	case MTRRIOC_SET_PAGE_ENTRY:
+#ifdef CONFIG_COMPAT
+	case MTRRIOC32_SET_PAGE_ENTRY:
+#endif
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		err = mtrr_add_page(sentry.base, sentry.size, sentry.type, 0);
		break;
	case MTRRIOC_DEL_PAGE_ENTRY:
+#ifdef CONFIG_COMPAT
+	case MTRRIOC32_DEL_PAGE_ENTRY:
+#endif
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		err = mtrr_file_del(sentry.base, sentry.size, file, 1);
		break;
	case MTRRIOC_KILL_PAGE_ENTRY:
+#ifdef CONFIG_COMPAT
+	case MTRRIOC32_KILL_PAGE_ENTRY:
+#endif
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		err = mtrr_del_page(-1, sentry.base, sentry.size);
		break;
	case MTRRIOC_GET_PAGE_ENTRY:
+#ifdef CONFIG_COMPAT
+	case MTRRIOC32_GET_PAGE_ENTRY:
+#endif
		if (gentry.regnum >= num_var_ranges)
			return -EINVAL;
		mtrr_if->get(gentry.regnum, &gentry.base, &size, &type);
......
...@@ -50,7 +50,7 @@ u32 num_var_ranges = 0;
unsigned int *usage_table;
static DEFINE_MUTEX(mtrr_mutex);

-u32 size_or_mask, size_and_mask;
+u64 size_or_mask, size_and_mask;

static struct mtrr_ops * mtrr_ops[X86_VENDOR_NUM] = {};
...@@ -662,8 +662,8 @@ void __init mtrr_bp_init(void)
			     boot_cpu_data.x86_mask == 0x4))
				phys_addr = 36;

-			size_or_mask = ~((1 << (phys_addr - PAGE_SHIFT)) - 1);
-			size_and_mask = ~size_or_mask & 0xfff00000;
+			size_or_mask = ~((1ULL << (phys_addr - PAGE_SHIFT)) - 1);
+			size_and_mask = ~size_or_mask & 0xfffff00000ULL;
		} else if (boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR &&
			   boot_cpu_data.x86 == 6) {
			/* VIA C* family have Intel style MTRRs, but
......
...@@ -84,7 +84,7 @@ void get_mtrr_state(void);
extern void set_mtrr_ops(struct mtrr_ops * ops);

-extern u32 size_or_mask, size_and_mask;
+extern u64 size_or_mask, size_and_mask;
extern struct mtrr_ops * mtrr_if;

#define is_cpu(vnd) (mtrr_if && mtrr_if->vendor == X86_VENDOR_##vnd)
......
...@@ -29,7 +29,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
		NULL, NULL, NULL, "syscall", NULL, NULL, NULL, NULL,
		NULL, NULL, NULL, "mp", "nx", NULL, "mmxext", NULL,
-		NULL, "fxsr_opt", "rdtscp", NULL, NULL, "lm", "3dnowext", "3dnow",
+		NULL, "fxsr_opt", "pdpe1gb", "rdtscp", NULL, "lm", "3dnowext", "3dnow",

		/* Transmeta-defined */
		"recovery", "longrun", NULL, "lrti", NULL, NULL, NULL, NULL,
...@@ -47,7 +47,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
		/* Intel-defined (#2) */
		"pni", NULL, NULL, "monitor", "ds_cpl", "vmx", "smx", "est",
		"tm2", "ssse3", "cid", NULL, NULL, "cx16", "xtpr", NULL,
-		NULL, NULL, "dca", NULL, NULL, NULL, NULL, NULL,
+		NULL, NULL, "dca", NULL, NULL, NULL, NULL, "popcnt",
		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,

		/* VIA/Cyrix/Centaur-defined */
...@@ -57,8 +57,9 @@ static int show_cpuinfo(struct seq_file *m, void *v)
		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,

		/* AMD-defined (#2) */
-		"lahf_lm", "cmp_legacy", "svm", NULL, "cr8legacy", NULL, NULL, NULL,
-		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+		"lahf_lm", "cmp_legacy", "svm", "extapic", "cr8legacy", "abm",
+		"sse4a", "misalignsse",
+		"3dnowprefetch", "osvw", "ibs", NULL, NULL, NULL, NULL, NULL,
		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
	};
...@@ -69,8 +70,11 @@ static int show_cpuinfo(struct seq_file *m, void *v)
		"ttp", /* thermal trip */
		"tm",
		"stc",
+		"100mhzsteps",
+		"hwpstate",
		NULL,
-		/* nothing */ /* constant_tsc - moved to flags */
+		NULL, /* constant_tsc - moved to flags */
+		/* nothing */
	};

	struct cpuinfo_x86 *c = v;
	int i, n = c - cpu_data;
......
...@@ -9,7 +9,7 @@ static void __cpuinit init_transmeta(struct cpuinfo_x86 *c) ...@@ -9,7 +9,7 @@ static void __cpuinit init_transmeta(struct cpuinfo_x86 *c)
{ {
unsigned int cap_mask, uk, max, dummy; unsigned int cap_mask, uk, max, dummy;
unsigned int cms_rev1, cms_rev2; unsigned int cms_rev1, cms_rev2;
unsigned int cpu_rev, cpu_freq, cpu_flags, new_cpu_rev; unsigned int cpu_rev, cpu_freq = 0, cpu_flags, new_cpu_rev;
char cpu_info[65]; char cpu_info[65];
get_model_name(c); /* Same as AMD/Cyrix */ get_model_name(c); /* Same as AMD/Cyrix */
...@@ -72,6 +72,9 @@ static void __cpuinit init_transmeta(struct cpuinfo_x86 *c) ...@@ -72,6 +72,9 @@ static void __cpuinit init_transmeta(struct cpuinfo_x86 *c)
wrmsr(0x80860004, ~0, uk); wrmsr(0x80860004, ~0, uk);
c->x86_capability[0] = cpuid_edx(0x00000001); c->x86_capability[0] = cpuid_edx(0x00000001);
wrmsr(0x80860004, cap_mask, uk); wrmsr(0x80860004, cap_mask, uk);
/* All Transmeta CPUs have a constant TSC */
set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
/* If we can run i686 user-space code, call us an i686 */ /* If we can run i686 user-space code, call us an i686 */
#define USER686 (X86_FEATURE_TSC|X86_FEATURE_CX8|X86_FEATURE_CMOV) #define USER686 (X86_FEATURE_TSC|X86_FEATURE_CX8|X86_FEATURE_CMOV)
......
...@@ -48,7 +48,6 @@ static struct class *cpuid_class; ...@@ -48,7 +48,6 @@ static struct class *cpuid_class;
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
struct cpuid_command { struct cpuid_command {
int cpu;
u32 reg; u32 reg;
u32 *data; u32 *data;
}; };
...@@ -57,8 +56,7 @@ static void cpuid_smp_cpuid(void *cmd_block) ...@@ -57,8 +56,7 @@ static void cpuid_smp_cpuid(void *cmd_block)
{ {
struct cpuid_command *cmd = (struct cpuid_command *)cmd_block; struct cpuid_command *cmd = (struct cpuid_command *)cmd_block;
if (cmd->cpu == smp_processor_id()) cpuid(cmd->reg, &cmd->data[0], &cmd->data[1], &cmd->data[2],
cpuid(cmd->reg, &cmd->data[0], &cmd->data[1], &cmd->data[2],
&cmd->data[3]); &cmd->data[3]);
} }
...@@ -70,11 +68,10 @@ static inline void do_cpuid(int cpu, u32 reg, u32 * data) ...@@ -70,11 +68,10 @@ static inline void do_cpuid(int cpu, u32 reg, u32 * data)
if (cpu == smp_processor_id()) { if (cpu == smp_processor_id()) {
cpuid(reg, &data[0], &data[1], &data[2], &data[3]); cpuid(reg, &data[0], &data[1], &data[2], &data[3]);
} else { } else {
cmd.cpu = cpu;
cmd.reg = reg; cmd.reg = reg;
cmd.data = data; cmd.data = data;
smp_call_function(cpuid_smp_cpuid, &cmd, 1, 1); smp_call_function_single(cpu, cpuid_smp_cpuid, &cmd, 1, 1);
} }
preempt_enable(); preempt_enable();
} }
......
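The shape of this change — an smp_call_function() broadcast with a manual "is this my CPU?" test replaced by smp_call_function_single() — recurs in msr.c further down. A condensed sketch of the new calling pattern, using the five-argument signature visible in this diff:

	/* sketch: read cpuid on another CPU and wait for the result */
	struct cpuid_command cmd = { .reg = reg, .data = data };
	smp_call_function_single(cpu, cpuid_smp_cpuid, &cmd, 1, 1);
	/* arguments: target cpu, callback, callback argument,
	   nonatomic flag, wait-for-completion flag */

Because the callback now only ever runs on the requested CPU, the cpu field and its guard test could be deleted outright.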
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/e820.h> #include <asm/e820.h>
#include <asm/setup.h>
#ifdef CONFIG_EFI #ifdef CONFIG_EFI
int efi_enabled = 0; int efi_enabled = 0;
...@@ -156,21 +157,22 @@ static struct resource standard_io_resources[] = { { ...@@ -156,21 +157,22 @@ static struct resource standard_io_resources[] = { {
.flags = IORESOURCE_BUSY | IORESOURCE_IO .flags = IORESOURCE_BUSY | IORESOURCE_IO
} }; } };
static int romsignature(const unsigned char *x) #define ROMSIGNATURE 0xaa55
static int __init romsignature(const unsigned char *rom)
{ {
unsigned short sig; unsigned short sig;
int ret = 0;
if (probe_kernel_address((const unsigned short *)x, sig) == 0) return probe_kernel_address((const unsigned short *)rom, sig) == 0 &&
ret = (sig == 0xaa55); sig == ROMSIGNATURE;
return ret;
} }
static int __init romchecksum(unsigned char *rom, unsigned long length) static int __init romchecksum(unsigned char *rom, unsigned long length)
{ {
unsigned char *p, sum = 0; unsigned char sum;
for (p = rom; p < rom + length; p++) for (sum = 0; length; length--)
sum += *p; sum += *rom++;
return sum == 0; return sum == 0;
} }
......
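For context, the two helpers above are used together when scanning the legacy adapter-ROM area; roughly like this (a sketch assuming the usual BIOS convention that a ROM's length lives at byte 2 in 512-byte units — not code from this patch):

	/* sketch: validate one candidate option ROM */
	static int __init probe_one_rom(unsigned char *rom)
	{
		unsigned long length;

		if (!romsignature(rom))
			return 0;
		length = rom[2] * 512;	/* size byte, in 512-byte units */
		return length > 0 && romchecksum(rom, length);
	}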
...@@ -30,7 +30,7 @@ ...@@ -30,7 +30,7 @@
* 18(%esp) - %eax * 18(%esp) - %eax
* 1C(%esp) - %ds * 1C(%esp) - %ds
* 20(%esp) - %es * 20(%esp) - %es
* 24(%esp) - %gs * 24(%esp) - %fs
* 28(%esp) - orig_eax * 28(%esp) - orig_eax
* 2C(%esp) - %eip * 2C(%esp) - %eip
* 30(%esp) - %cs * 30(%esp) - %cs
...@@ -99,9 +99,9 @@ VM_MASK = 0x00020000 ...@@ -99,9 +99,9 @@ VM_MASK = 0x00020000
#define SAVE_ALL \ #define SAVE_ALL \
cld; \ cld; \
pushl %gs; \ pushl %fs; \
CFI_ADJUST_CFA_OFFSET 4;\ CFI_ADJUST_CFA_OFFSET 4;\
/*CFI_REL_OFFSET gs, 0;*/\ /*CFI_REL_OFFSET fs, 0;*/\
pushl %es; \ pushl %es; \
CFI_ADJUST_CFA_OFFSET 4;\ CFI_ADJUST_CFA_OFFSET 4;\
/*CFI_REL_OFFSET es, 0;*/\ /*CFI_REL_OFFSET es, 0;*/\
...@@ -133,7 +133,7 @@ VM_MASK = 0x00020000 ...@@ -133,7 +133,7 @@ VM_MASK = 0x00020000
movl %edx, %ds; \ movl %edx, %ds; \
movl %edx, %es; \ movl %edx, %es; \
movl $(__KERNEL_PDA), %edx; \ movl $(__KERNEL_PDA), %edx; \
movl %edx, %gs movl %edx, %fs
#define RESTORE_INT_REGS \ #define RESTORE_INT_REGS \
popl %ebx; \ popl %ebx; \
...@@ -166,9 +166,9 @@ VM_MASK = 0x00020000 ...@@ -166,9 +166,9 @@ VM_MASK = 0x00020000
2: popl %es; \ 2: popl %es; \
CFI_ADJUST_CFA_OFFSET -4;\ CFI_ADJUST_CFA_OFFSET -4;\
/*CFI_RESTORE es;*/\ /*CFI_RESTORE es;*/\
3: popl %gs; \ 3: popl %fs; \
CFI_ADJUST_CFA_OFFSET -4;\ CFI_ADJUST_CFA_OFFSET -4;\
/*CFI_RESTORE gs;*/\ /*CFI_RESTORE fs;*/\
.pushsection .fixup,"ax"; \ .pushsection .fixup,"ax"; \
4: movl $0,(%esp); \ 4: movl $0,(%esp); \
jmp 1b; \ jmp 1b; \
...@@ -227,6 +227,7 @@ ENTRY(ret_from_fork) ...@@ -227,6 +227,7 @@ ENTRY(ret_from_fork)
CFI_ADJUST_CFA_OFFSET -4 CFI_ADJUST_CFA_OFFSET -4
jmp syscall_exit jmp syscall_exit
CFI_ENDPROC CFI_ENDPROC
END(ret_from_fork)
/* /*
* Return to user mode is not as complex as all this looks, * Return to user mode is not as complex as all this looks,
...@@ -258,6 +259,7 @@ ENTRY(resume_userspace) ...@@ -258,6 +259,7 @@ ENTRY(resume_userspace)
# int/exception return? # int/exception return?
jne work_pending jne work_pending
jmp restore_all jmp restore_all
END(ret_from_exception)
#ifdef CONFIG_PREEMPT #ifdef CONFIG_PREEMPT
ENTRY(resume_kernel) ENTRY(resume_kernel)
...@@ -272,6 +274,7 @@ need_resched: ...@@ -272,6 +274,7 @@ need_resched:
jz restore_all jz restore_all
call preempt_schedule_irq call preempt_schedule_irq
jmp need_resched jmp need_resched
END(resume_kernel)
#endif #endif
CFI_ENDPROC CFI_ENDPROC
...@@ -349,16 +352,17 @@ sysenter_past_esp: ...@@ -349,16 +352,17 @@ sysenter_past_esp:
movl PT_OLDESP(%esp), %ecx movl PT_OLDESP(%esp), %ecx
xorl %ebp,%ebp xorl %ebp,%ebp
TRACE_IRQS_ON TRACE_IRQS_ON
1: mov PT_GS(%esp), %gs 1: mov PT_FS(%esp), %fs
ENABLE_INTERRUPTS_SYSEXIT ENABLE_INTERRUPTS_SYSEXIT
CFI_ENDPROC CFI_ENDPROC
.pushsection .fixup,"ax" .pushsection .fixup,"ax"
2: movl $0,PT_GS(%esp) 2: movl $0,PT_FS(%esp)
jmp 1b jmp 1b
.section __ex_table,"a" .section __ex_table,"a"
.align 4 .align 4
.long 1b,2b .long 1b,2b
.popsection .popsection
ENDPROC(sysenter_entry)
# system call handler stub # system call handler stub
ENTRY(system_call) ENTRY(system_call)
...@@ -459,6 +463,7 @@ ldt_ss: ...@@ -459,6 +463,7 @@ ldt_ss:
CFI_ADJUST_CFA_OFFSET -8 CFI_ADJUST_CFA_OFFSET -8
jmp restore_nocheck jmp restore_nocheck
CFI_ENDPROC CFI_ENDPROC
ENDPROC(system_call)
# perform work that needs to be done immediately before resumption # perform work that needs to be done immediately before resumption
ALIGN ALIGN
...@@ -504,6 +509,7 @@ work_notifysig_v86: ...@@ -504,6 +509,7 @@ work_notifysig_v86:
xorl %edx, %edx xorl %edx, %edx
call do_notify_resume call do_notify_resume
jmp resume_userspace_sig jmp resume_userspace_sig
END(work_pending)
# perform syscall exit tracing # perform syscall exit tracing
ALIGN ALIGN
...@@ -519,6 +525,7 @@ syscall_trace_entry: ...@@ -519,6 +525,7 @@ syscall_trace_entry:
cmpl $(nr_syscalls), %eax cmpl $(nr_syscalls), %eax
jnae syscall_call jnae syscall_call
jmp syscall_exit jmp syscall_exit
END(syscall_trace_entry)
# perform syscall exit tracing # perform syscall exit tracing
ALIGN ALIGN
...@@ -532,6 +539,7 @@ syscall_exit_work: ...@@ -532,6 +539,7 @@ syscall_exit_work:
movl $1, %edx movl $1, %edx
call do_syscall_trace call do_syscall_trace
jmp resume_userspace jmp resume_userspace
END(syscall_exit_work)
CFI_ENDPROC CFI_ENDPROC
RING0_INT_FRAME # can't unwind into user space anyway RING0_INT_FRAME # can't unwind into user space anyway
...@@ -542,15 +550,17 @@ syscall_fault: ...@@ -542,15 +550,17 @@ syscall_fault:
GET_THREAD_INFO(%ebp) GET_THREAD_INFO(%ebp)
movl $-EFAULT,PT_EAX(%esp) movl $-EFAULT,PT_EAX(%esp)
jmp resume_userspace jmp resume_userspace
END(syscall_fault)
syscall_badsys: syscall_badsys:
movl $-ENOSYS,PT_EAX(%esp) movl $-ENOSYS,PT_EAX(%esp)
jmp resume_userspace jmp resume_userspace
END(syscall_badsys)
CFI_ENDPROC CFI_ENDPROC
#define FIXUP_ESPFIX_STACK \ #define FIXUP_ESPFIX_STACK \
/* since we are on a wrong stack, we can't make it C code :( */ \ /* since we are on a wrong stack, we can't make it C code :( */ \
movl %gs:PDA_cpu, %ebx; \ movl %fs:PDA_cpu, %ebx; \
PER_CPU(cpu_gdt_descr, %ebx); \ PER_CPU(cpu_gdt_descr, %ebx); \
movl GDS_address(%ebx), %ebx; \ movl GDS_address(%ebx), %ebx; \
GET_DESC_BASE(GDT_ENTRY_ESPFIX_SS, %ebx, %eax, %ax, %al, %ah); \ GET_DESC_BASE(GDT_ENTRY_ESPFIX_SS, %ebx, %eax, %ax, %al, %ah); \
...@@ -581,9 +591,9 @@ syscall_badsys: ...@@ -581,9 +591,9 @@ syscall_badsys:
ENTRY(interrupt) ENTRY(interrupt)
.text .text
vector=0
ENTRY(irq_entries_start) ENTRY(irq_entries_start)
RING0_INT_FRAME RING0_INT_FRAME
vector=0
.rept NR_IRQS .rept NR_IRQS
ALIGN ALIGN
.if vector .if vector
...@@ -592,11 +602,16 @@ ENTRY(irq_entries_start) ...@@ -592,11 +602,16 @@ ENTRY(irq_entries_start)
1: pushl $~(vector) 1: pushl $~(vector)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp common_interrupt jmp common_interrupt
.data .previous
.long 1b .long 1b
.text .text
vector=vector+1 vector=vector+1
.endr .endr
END(irq_entries_start)
.previous
END(interrupt)
.previous
/* /*
* the CPU automatically disables interrupts when executing an IRQ vector, * the CPU automatically disables interrupts when executing an IRQ vector,
...@@ -609,6 +624,7 @@ common_interrupt: ...@@ -609,6 +624,7 @@ common_interrupt:
movl %esp,%eax movl %esp,%eax
call do_IRQ call do_IRQ
jmp ret_from_intr jmp ret_from_intr
ENDPROC(common_interrupt)
CFI_ENDPROC CFI_ENDPROC
#define BUILD_INTERRUPT(name, nr) \ #define BUILD_INTERRUPT(name, nr) \
...@@ -621,18 +637,24 @@ ENTRY(name) \ ...@@ -621,18 +637,24 @@ ENTRY(name) \
movl %esp,%eax; \ movl %esp,%eax; \
call smp_/**/name; \ call smp_/**/name; \
jmp ret_from_intr; \ jmp ret_from_intr; \
CFI_ENDPROC CFI_ENDPROC; \
ENDPROC(name)
/* The include is where all of the SMP etc. interrupts come from */ /* The include is where all of the SMP etc. interrupts come from */
#include "entry_arch.h" #include "entry_arch.h"
/* This alternate entry is needed because we hijack the apic LVTT */
#if defined(CONFIG_VMI) && defined(CONFIG_X86_LOCAL_APIC)
BUILD_INTERRUPT(apic_vmi_timer_interrupt,LOCAL_TIMER_VECTOR)
#endif
KPROBE_ENTRY(page_fault) KPROBE_ENTRY(page_fault)
RING0_EC_FRAME RING0_EC_FRAME
pushl $do_page_fault pushl $do_page_fault
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
ALIGN ALIGN
error_code: error_code:
/* the function address is in %gs's slot on the stack */ /* the function address is in %fs's slot on the stack */
pushl %es pushl %es
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
/*CFI_REL_OFFSET es, 0*/ /*CFI_REL_OFFSET es, 0*/
...@@ -661,20 +683,20 @@ error_code: ...@@ -661,20 +683,20 @@ error_code:
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET ebx, 0 CFI_REL_OFFSET ebx, 0
cld cld
pushl %gs pushl %fs
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
/*CFI_REL_OFFSET gs, 0*/ /*CFI_REL_OFFSET fs, 0*/
movl $(__KERNEL_PDA), %ecx movl $(__KERNEL_PDA), %ecx
movl %ecx, %gs movl %ecx, %fs
UNWIND_ESPFIX_STACK UNWIND_ESPFIX_STACK
popl %ecx popl %ecx
CFI_ADJUST_CFA_OFFSET -4 CFI_ADJUST_CFA_OFFSET -4
/*CFI_REGISTER es, ecx*/ /*CFI_REGISTER es, ecx*/
movl PT_GS(%esp), %edi # get the function address movl PT_FS(%esp), %edi # get the function address
movl PT_ORIG_EAX(%esp), %edx # get the error code movl PT_ORIG_EAX(%esp), %edx # get the error code
movl $-1, PT_ORIG_EAX(%esp) # no syscall to restart movl $-1, PT_ORIG_EAX(%esp) # no syscall to restart
mov %ecx, PT_GS(%esp) mov %ecx, PT_FS(%esp)
/*CFI_REL_OFFSET gs, ES*/ /*CFI_REL_OFFSET fs, ES*/
movl $(__USER_DS), %ecx movl $(__USER_DS), %ecx
movl %ecx, %ds movl %ecx, %ds
movl %ecx, %es movl %ecx, %es
...@@ -692,6 +714,7 @@ ENTRY(coprocessor_error) ...@@ -692,6 +714,7 @@ ENTRY(coprocessor_error)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(coprocessor_error)
ENTRY(simd_coprocessor_error) ENTRY(simd_coprocessor_error)
RING0_INT_FRAME RING0_INT_FRAME
...@@ -701,6 +724,7 @@ ENTRY(simd_coprocessor_error) ...@@ -701,6 +724,7 @@ ENTRY(simd_coprocessor_error)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(simd_coprocessor_error)
ENTRY(device_not_available) ENTRY(device_not_available)
RING0_INT_FRAME RING0_INT_FRAME
...@@ -721,6 +745,7 @@ device_not_available_emulate: ...@@ -721,6 +745,7 @@ device_not_available_emulate:
CFI_ADJUST_CFA_OFFSET -4 CFI_ADJUST_CFA_OFFSET -4
jmp ret_from_exception jmp ret_from_exception
CFI_ENDPROC CFI_ENDPROC
END(device_not_available)
/* /*
* Debug traps and NMI can happen at the one SYSENTER instruction * Debug traps and NMI can happen at the one SYSENTER instruction
...@@ -864,10 +889,12 @@ ENTRY(native_iret) ...@@ -864,10 +889,12 @@ ENTRY(native_iret)
.align 4 .align 4
.long 1b,iret_exc .long 1b,iret_exc
.previous .previous
END(native_iret)
ENTRY(native_irq_enable_sysexit) ENTRY(native_irq_enable_sysexit)
sti sti
sysexit sysexit
END(native_irq_enable_sysexit)
#endif #endif
KPROBE_ENTRY(int3) KPROBE_ENTRY(int3)
...@@ -890,6 +917,7 @@ ENTRY(overflow) ...@@ -890,6 +917,7 @@ ENTRY(overflow)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(overflow)
ENTRY(bounds) ENTRY(bounds)
RING0_INT_FRAME RING0_INT_FRAME
...@@ -899,6 +927,7 @@ ENTRY(bounds) ...@@ -899,6 +927,7 @@ ENTRY(bounds)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(bounds)
ENTRY(invalid_op) ENTRY(invalid_op)
RING0_INT_FRAME RING0_INT_FRAME
...@@ -908,6 +937,7 @@ ENTRY(invalid_op) ...@@ -908,6 +937,7 @@ ENTRY(invalid_op)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(invalid_op)
ENTRY(coprocessor_segment_overrun) ENTRY(coprocessor_segment_overrun)
RING0_INT_FRAME RING0_INT_FRAME
...@@ -917,6 +947,7 @@ ENTRY(coprocessor_segment_overrun) ...@@ -917,6 +947,7 @@ ENTRY(coprocessor_segment_overrun)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(coprocessor_segment_overrun)
ENTRY(invalid_TSS) ENTRY(invalid_TSS)
RING0_EC_FRAME RING0_EC_FRAME
...@@ -924,6 +955,7 @@ ENTRY(invalid_TSS) ...@@ -924,6 +955,7 @@ ENTRY(invalid_TSS)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(invalid_TSS)
ENTRY(segment_not_present) ENTRY(segment_not_present)
RING0_EC_FRAME RING0_EC_FRAME
...@@ -931,6 +963,7 @@ ENTRY(segment_not_present) ...@@ -931,6 +963,7 @@ ENTRY(segment_not_present)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(segment_not_present)
ENTRY(stack_segment) ENTRY(stack_segment)
RING0_EC_FRAME RING0_EC_FRAME
...@@ -938,6 +971,7 @@ ENTRY(stack_segment) ...@@ -938,6 +971,7 @@ ENTRY(stack_segment)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(stack_segment)
KPROBE_ENTRY(general_protection) KPROBE_ENTRY(general_protection)
RING0_EC_FRAME RING0_EC_FRAME
...@@ -953,6 +987,7 @@ ENTRY(alignment_check) ...@@ -953,6 +987,7 @@ ENTRY(alignment_check)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(alignment_check)
ENTRY(divide_error) ENTRY(divide_error)
RING0_INT_FRAME RING0_INT_FRAME
...@@ -962,6 +997,7 @@ ENTRY(divide_error) ...@@ -962,6 +997,7 @@ ENTRY(divide_error)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(divide_error)
#ifdef CONFIG_X86_MCE #ifdef CONFIG_X86_MCE
ENTRY(machine_check) ENTRY(machine_check)
...@@ -972,6 +1008,7 @@ ENTRY(machine_check) ...@@ -972,6 +1008,7 @@ ENTRY(machine_check)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(machine_check)
#endif #endif
ENTRY(spurious_interrupt_bug) ENTRY(spurious_interrupt_bug)
...@@ -982,6 +1019,7 @@ ENTRY(spurious_interrupt_bug) ...@@ -982,6 +1019,7 @@ ENTRY(spurious_interrupt_bug)
CFI_ADJUST_CFA_OFFSET 4 CFI_ADJUST_CFA_OFFSET 4
jmp error_code jmp error_code
CFI_ENDPROC CFI_ENDPROC
END(spurious_interrupt_bug)
ENTRY(kernel_thread_helper) ENTRY(kernel_thread_helper)
pushl $0 # fake return address for unwinder pushl $0 # fake return address for unwinder
......
...@@ -53,6 +53,7 @@ ...@@ -53,6 +53,7 @@
* any particular GDT layout, because we load our own as soon as we * any particular GDT layout, because we load our own as soon as we
* can. * can.
*/ */
.section .text.head,"ax",@progbits
ENTRY(startup_32) ENTRY(startup_32)
#ifdef CONFIG_PARAVIRT #ifdef CONFIG_PARAVIRT
...@@ -141,16 +142,25 @@ page_pde_offset = (__PAGE_OFFSET >> 20); ...@@ -141,16 +142,25 @@ page_pde_offset = (__PAGE_OFFSET >> 20);
jb 10b jb 10b
movl %edi,(init_pg_tables_end - __PAGE_OFFSET) movl %edi,(init_pg_tables_end - __PAGE_OFFSET)
#ifdef CONFIG_SMP
xorl %ebx,%ebx /* This is the boot CPU (BSP) */ xorl %ebx,%ebx /* This is the boot CPU (BSP) */
jmp 3f jmp 3f
/* /*
* Non-boot CPU entry point; entered from trampoline.S * Non-boot CPU entry point; entered from trampoline.S
* We can't lgdt here, because lgdt itself uses a data segment, but * We can't lgdt here, because lgdt itself uses a data segment, but
* we know the trampoline has already loaded the boot_gdt_table GDT * we know the trampoline has already loaded the boot_gdt_table GDT
* for us. * for us.
*
* If CPU hotplug is not supported then this code can go in the init
* section, which will be freed later.
*/ */
#ifdef CONFIG_HOTPLUG_CPU
.section .text,"ax",@progbits
#else
.section .init.text,"ax",@progbits
#endif
#ifdef CONFIG_SMP
ENTRY(startup_32_smp) ENTRY(startup_32_smp)
cld cld
movl $(__BOOT_DS),%eax movl $(__BOOT_DS),%eax
...@@ -208,8 +218,8 @@ ENTRY(startup_32_smp) ...@@ -208,8 +218,8 @@ ENTRY(startup_32_smp)
xorl %ebx,%ebx xorl %ebx,%ebx
incl %ebx incl %ebx
3:
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
3:
/* /*
* Enable paging * Enable paging
...@@ -309,7 +319,7 @@ is386: movl $2,%ecx # set MP ...@@ -309,7 +319,7 @@ is386: movl $2,%ecx # set MP
call check_x87 call check_x87
call setup_pda call setup_pda
lgdt cpu_gdt_descr lgdt early_gdt_descr
lidt idt_descr lidt idt_descr
ljmp $(__KERNEL_CS),$1f ljmp $(__KERNEL_CS),$1f
1: movl $(__KERNEL_DS),%eax # reload all the segment registers 1: movl $(__KERNEL_DS),%eax # reload all the segment registers
...@@ -319,12 +329,12 @@ is386: movl $2,%ecx # set MP ...@@ -319,12 +329,12 @@ is386: movl $2,%ecx # set MP
movl %eax,%ds movl %eax,%ds
movl %eax,%es movl %eax,%es
xorl %eax,%eax # Clear FS and LDT xorl %eax,%eax # Clear GS and LDT
movl %eax,%fs movl %eax,%gs
lldt %ax lldt %ax
movl $(__KERNEL_PDA),%eax movl $(__KERNEL_PDA),%eax
mov %eax,%gs mov %eax,%fs
cld # gcc2 wants the direction flag cleared at all times cld # gcc2 wants the direction flag cleared at all times
pushl $0 # fake return address for unwinder pushl $0 # fake return address for unwinder
...@@ -360,12 +370,12 @@ check_x87: ...@@ -360,12 +370,12 @@ check_x87:
* cpu_gdt_table and boot_pda; for secondary CPUs, these will be * cpu_gdt_table and boot_pda; for secondary CPUs, these will be
* that CPU's GDT and PDA. * that CPU's GDT and PDA.
*/ */
setup_pda: ENTRY(setup_pda)
/* get the PDA pointer */ /* get the PDA pointer */
movl start_pda, %eax movl start_pda, %eax
/* slot the PDA address into the GDT */ /* slot the PDA address into the GDT */
mov cpu_gdt_descr+2, %ecx mov early_gdt_descr+2, %ecx
mov %ax, (__KERNEL_PDA+0+2)(%ecx) /* base & 0x0000ffff */ mov %ax, (__KERNEL_PDA+0+2)(%ecx) /* base & 0x0000ffff */
shr $16, %eax shr $16, %eax
mov %al, (__KERNEL_PDA+4+0)(%ecx) /* base & 0x00ff0000 */ mov %al, (__KERNEL_PDA+4+0)(%ecx) /* base & 0x00ff0000 */
...@@ -492,6 +502,7 @@ ignore_int: ...@@ -492,6 +502,7 @@ ignore_int:
#endif #endif
iret iret
.section .text
#ifdef CONFIG_PARAVIRT #ifdef CONFIG_PARAVIRT
startup_paravirt: startup_paravirt:
cld cld
...@@ -502,10 +513,11 @@ startup_paravirt: ...@@ -502,10 +513,11 @@ startup_paravirt:
pushl %ecx pushl %ecx
pushl %eax pushl %eax
/* paravirt.o is last in link, and that probe fn never returns */
pushl $__start_paravirtprobe pushl $__start_paravirtprobe
1: 1:
movl 0(%esp), %eax movl 0(%esp), %eax
cmpl $__stop_paravirtprobe, %eax
je unhandled_paravirt
pushl (%eax) pushl (%eax)
movl 8(%esp), %eax movl 8(%esp), %eax
call *(%esp) call *(%esp)
...@@ -517,6 +529,10 @@ startup_paravirt: ...@@ -517,6 +529,10 @@ startup_paravirt:
addl $4, (%esp) addl $4, (%esp)
jmp 1b jmp 1b
unhandled_paravirt:
/* Nothing wanted us: we're screwed. */
ud2
#endif #endif
/* /*
...@@ -581,7 +597,7 @@ idt_descr: ...@@ -581,7 +597,7 @@ idt_descr:
# boot GDT descriptor (later on used by CPU#0): # boot GDT descriptor (later on used by CPU#0):
.word 0 # 32 bit align gdt_desc.address .word 0 # 32 bit align gdt_desc.address
ENTRY(cpu_gdt_descr) ENTRY(early_gdt_descr)
.word GDT_ENTRIES*8-1 .word GDT_ENTRIES*8-1
.long cpu_gdt_table .long cpu_gdt_table
......
...@@ -1920,7 +1920,7 @@ static void __init setup_ioapic_ids_from_mpc(void) ...@@ -1920,7 +1920,7 @@ static void __init setup_ioapic_ids_from_mpc(void)
static void __init setup_ioapic_ids_from_mpc(void) { } static void __init setup_ioapic_ids_from_mpc(void) { }
#endif #endif
static int no_timer_check __initdata; int no_timer_check __initdata;
static int __init notimercheck(char *s) static int __init notimercheck(char *s)
{ {
...@@ -2310,7 +2310,7 @@ static inline void __init check_timer(void) ...@@ -2310,7 +2310,7 @@ static inline void __init check_timer(void)
disable_8259A_irq(0); disable_8259A_irq(0);
set_irq_chip_and_handler_name(0, &lapic_chip, handle_fasteoi_irq, set_irq_chip_and_handler_name(0, &lapic_chip, handle_fasteoi_irq,
"fasteio"); "fasteoi");
apic_write_around(APIC_LVT0, APIC_DM_FIXED | vector); /* Fixed mode */ apic_write_around(APIC_LVT0, APIC_DM_FIXED | vector); /* Fixed mode */
enable_8259A_irq(0); enable_8259A_irq(0);
......
...@@ -19,6 +19,8 @@ ...@@ -19,6 +19,8 @@
#include <linux/cpu.h> #include <linux/cpu.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <asm/idle.h>
DEFINE_PER_CPU(irq_cpustat_t, irq_stat) ____cacheline_internodealigned_in_smp; DEFINE_PER_CPU(irq_cpustat_t, irq_stat) ____cacheline_internodealigned_in_smp;
EXPORT_PER_CPU_SYMBOL(irq_stat); EXPORT_PER_CPU_SYMBOL(irq_stat);
...@@ -61,6 +63,7 @@ fastcall unsigned int do_IRQ(struct pt_regs *regs) ...@@ -61,6 +63,7 @@ fastcall unsigned int do_IRQ(struct pt_regs *regs)
union irq_ctx *curctx, *irqctx; union irq_ctx *curctx, *irqctx;
u32 *isp; u32 *isp;
#endif #endif
exit_idle();
if (unlikely((unsigned)irq >= NR_IRQS)) { if (unlikely((unsigned)irq >= NR_IRQS)) {
printk(KERN_EMERG "%s: cannot handle IRQ %d\n", printk(KERN_EMERG "%s: cannot handle IRQ %d\n",
......
...@@ -363,7 +363,7 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) ...@@ -363,7 +363,7 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
" pushf\n" " pushf\n"
/* skip cs, eip, orig_eax */ /* skip cs, eip, orig_eax */
" subl $12, %esp\n" " subl $12, %esp\n"
" pushl %gs\n" " pushl %fs\n"
" pushl %ds\n" " pushl %ds\n"
" pushl %es\n" " pushl %es\n"
" pushl %eax\n" " pushl %eax\n"
...@@ -387,7 +387,7 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) ...@@ -387,7 +387,7 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
" popl %edi\n" " popl %edi\n"
" popl %ebp\n" " popl %ebp\n"
" popl %eax\n" " popl %eax\n"
/* skip eip, orig_eax, es, ds, gs */ /* skip eip, orig_eax, es, ds, fs */
" addl $20, %esp\n" " addl $20, %esp\n"
" popf\n" " popf\n"
" ret\n"); " ret\n");
...@@ -408,7 +408,7 @@ fastcall void *__kprobes trampoline_handler(struct pt_regs *regs) ...@@ -408,7 +408,7 @@ fastcall void *__kprobes trampoline_handler(struct pt_regs *regs)
spin_lock_irqsave(&kretprobe_lock, flags); spin_lock_irqsave(&kretprobe_lock, flags);
head = kretprobe_inst_table_head(current); head = kretprobe_inst_table_head(current);
/* fixup registers */ /* fixup registers */
regs->xcs = __KERNEL_CS; regs->xcs = __KERNEL_CS | get_kernel_rpl();
regs->eip = trampoline_address; regs->eip = trampoline_address;
regs->orig_eax = 0xffffffff; regs->orig_eax = 0xffffffff;
......
...@@ -384,7 +384,7 @@ static int do_microcode_update (void) ...@@ -384,7 +384,7 @@ static int do_microcode_update (void)
{ {
long cursor = 0; long cursor = 0;
int error = 0; int error = 0;
void *new_mc; void *new_mc = NULL;
int cpu; int cpu;
cpumask_t old; cpumask_t old;
......
...@@ -68,7 +68,6 @@ static inline int rdmsr_eio(u32 reg, u32 *eax, u32 *edx) ...@@ -68,7 +68,6 @@ static inline int rdmsr_eio(u32 reg, u32 *eax, u32 *edx)
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
struct msr_command { struct msr_command {
int cpu;
int err; int err;
u32 reg; u32 reg;
u32 data[2]; u32 data[2];
...@@ -78,16 +77,14 @@ static void msr_smp_wrmsr(void *cmd_block) ...@@ -78,16 +77,14 @@ static void msr_smp_wrmsr(void *cmd_block)
{ {
struct msr_command *cmd = (struct msr_command *)cmd_block; struct msr_command *cmd = (struct msr_command *)cmd_block;
if (cmd->cpu == smp_processor_id()) cmd->err = wrmsr_eio(cmd->reg, cmd->data[0], cmd->data[1]);
cmd->err = wrmsr_eio(cmd->reg, cmd->data[0], cmd->data[1]);
} }
static void msr_smp_rdmsr(void *cmd_block) static void msr_smp_rdmsr(void *cmd_block)
{ {
struct msr_command *cmd = (struct msr_command *)cmd_block; struct msr_command *cmd = (struct msr_command *)cmd_block;
if (cmd->cpu == smp_processor_id()) cmd->err = rdmsr_eio(cmd->reg, &cmd->data[0], &cmd->data[1]);
cmd->err = rdmsr_eio(cmd->reg, &cmd->data[0], &cmd->data[1]);
} }
static inline int do_wrmsr(int cpu, u32 reg, u32 eax, u32 edx) static inline int do_wrmsr(int cpu, u32 reg, u32 eax, u32 edx)
...@@ -99,12 +96,11 @@ static inline int do_wrmsr(int cpu, u32 reg, u32 eax, u32 edx) ...@@ -99,12 +96,11 @@ static inline int do_wrmsr(int cpu, u32 reg, u32 eax, u32 edx)
if (cpu == smp_processor_id()) { if (cpu == smp_processor_id()) {
ret = wrmsr_eio(reg, eax, edx); ret = wrmsr_eio(reg, eax, edx);
} else { } else {
cmd.cpu = cpu;
cmd.reg = reg; cmd.reg = reg;
cmd.data[0] = eax; cmd.data[0] = eax;
cmd.data[1] = edx; cmd.data[1] = edx;
smp_call_function(msr_smp_wrmsr, &cmd, 1, 1); smp_call_function_single(cpu, msr_smp_wrmsr, &cmd, 1, 1);
ret = cmd.err; ret = cmd.err;
} }
preempt_enable(); preempt_enable();
...@@ -120,10 +116,9 @@ static inline int do_rdmsr(int cpu, u32 reg, u32 * eax, u32 * edx) ...@@ -120,10 +116,9 @@ static inline int do_rdmsr(int cpu, u32 reg, u32 * eax, u32 * edx)
if (cpu == smp_processor_id()) { if (cpu == smp_processor_id()) {
ret = rdmsr_eio(reg, eax, edx); ret = rdmsr_eio(reg, eax, edx);
} else { } else {
cmd.cpu = cpu;
cmd.reg = reg; cmd.reg = reg;
smp_call_function(msr_smp_rdmsr, &cmd, 1, 1); smp_call_function_single(cpu, msr_smp_rdmsr, &cmd, 1, 1);
*eax = cmd.data[0]; *eax = cmd.data[0];
*edx = cmd.data[1]; *edx = cmd.data[1];
......
...@@ -185,7 +185,8 @@ static __cpuinit inline int nmi_known_cpu(void) ...@@ -185,7 +185,8 @@ static __cpuinit inline int nmi_known_cpu(void)
{ {
switch (boot_cpu_data.x86_vendor) { switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_AMD: case X86_VENDOR_AMD:
return ((boot_cpu_data.x86 == 15) || (boot_cpu_data.x86 == 6)); return ((boot_cpu_data.x86 == 15) || (boot_cpu_data.x86 == 6)
|| (boot_cpu_data.x86 == 16));
case X86_VENDOR_INTEL: case X86_VENDOR_INTEL:
if (cpu_has(&boot_cpu_data, X86_FEATURE_ARCH_PERFMON)) if (cpu_has(&boot_cpu_data, X86_FEATURE_ARCH_PERFMON))
return 1; return 1;
...@@ -216,6 +217,28 @@ static __init void nmi_cpu_busy(void *data) ...@@ -216,6 +217,28 @@ static __init void nmi_cpu_busy(void *data)
} }
#endif #endif
static unsigned int adjust_for_32bit_ctr(unsigned int hz)
{
u64 counter_val;
unsigned int retval = hz;
/*
* On Intel CPUs with P6/ARCH_PERFMON only the low 32 bits of the
* counter are writable, and the hardware sign-extends bit 31 into
* the higher bits. So the counter can only be programmed with
* 31-bit values, and bit 31 must be set so that bits 32 and up
* read back as 1. Find an nmi_hz that keeps the period in range.
*/
counter_val = (u64)cpu_khz * 1000;
do_div(counter_val, retval);
if (counter_val > 0x7fffffffULL) {
u64 count = (u64)cpu_khz * 1000;
do_div(count, 0x7fffffffUL);
retval = count + 1;
}
return retval;
}
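Worked numbers (my own, to make the arithmetic concrete): on a 3 GHz CPU, cpu_khz = 3000000, so at nmi_hz = 1 the counter would have to hold 3000000000 cycles — more than 0x7fffffff (2147483647). The function therefore returns 3000000000 / 2147483647 + 1 = 2, and at nmi_hz = 2 each period is 1500000000 cycles, which fits comfortably in 31 bits.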
static int __init check_nmi_watchdog(void) static int __init check_nmi_watchdog(void)
{ {
unsigned int *prev_nmi_count; unsigned int *prev_nmi_count;
...@@ -281,18 +304,10 @@ static int __init check_nmi_watchdog(void) ...@@ -281,18 +304,10 @@ static int __init check_nmi_watchdog(void)
struct nmi_watchdog_ctlblk *wd = &__get_cpu_var(nmi_watchdog_ctlblk); struct nmi_watchdog_ctlblk *wd = &__get_cpu_var(nmi_watchdog_ctlblk);
nmi_hz = 1; nmi_hz = 1;
/*
* On Intel CPUs with ARCH_PERFMON only 32 bits in the counter if (wd->perfctr_msr == MSR_P6_PERFCTR0 ||
* are writable, with higher bits sign extending from bit 31. wd->perfctr_msr == MSR_ARCH_PERFMON_PERFCTR0) {
* So, we can only program the counter with 31 bit values and nmi_hz = adjust_for_32bit_ctr(nmi_hz);
* 32nd bit should be 1, for 33.. to be 1.
* Find the appropriate nmi_hz
*/
if (wd->perfctr_msr == MSR_ARCH_PERFMON_PERFCTR0 &&
((u64)cpu_khz * 1000) > 0x7fffffffULL) {
u64 count = (u64)cpu_khz * 1000;
do_div(count, 0x7fffffffUL);
nmi_hz = count + 1;
} }
} }
...@@ -369,6 +384,34 @@ void enable_timer_nmi_watchdog(void) ...@@ -369,6 +384,34 @@ void enable_timer_nmi_watchdog(void)
} }
} }
static void __acpi_nmi_disable(void *__unused)
{
apic_write_around(APIC_LVT0, APIC_DM_NMI | APIC_LVT_MASKED);
}
/*
* Disable timer based NMIs on all CPUs:
*/
void acpi_nmi_disable(void)
{
if (atomic_read(&nmi_active) && nmi_watchdog == NMI_IO_APIC)
on_each_cpu(__acpi_nmi_disable, NULL, 0, 1);
}
static void __acpi_nmi_enable(void *__unused)
{
apic_write_around(APIC_LVT0, APIC_DM_NMI);
}
/*
* Enable timer based NMIs on all CPUs:
*/
void acpi_nmi_enable(void)
{
if (atomic_read(&nmi_active) && nmi_watchdog == NMI_IO_APIC)
on_each_cpu(__acpi_nmi_enable, NULL, 0, 1);
}
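The intended consumers of this pair live in ACPI code outside this diff; schematically a caller would bracket a section where the timer NMI must not fire (a hypothetical sketch):

	/* sketch: keep timer-based NMIs quiet across a fragile transition */
	acpi_nmi_disable();
	/* ...reprogram or hand over the timer hardware... */
	acpi_nmi_enable();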
#ifdef CONFIG_PM #ifdef CONFIG_PM
static int nmi_pm_active; /* nmi_active before suspend */ static int nmi_pm_active; /* nmi_active before suspend */
...@@ -442,6 +485,17 @@ static void write_watchdog_counter(unsigned int perfctr_msr, const char *descr) ...@@ -442,6 +485,17 @@ static void write_watchdog_counter(unsigned int perfctr_msr, const char *descr)
wrmsrl(perfctr_msr, 0 - count); wrmsrl(perfctr_msr, 0 - count);
} }
static void write_watchdog_counter32(unsigned int perfctr_msr,
const char *descr)
{
u64 count = (u64)cpu_khz * 1000;
do_div(count, nmi_hz);
if (descr)
Dprintk("setting %s to -0x%08Lx\n", descr, count);
wrmsr(perfctr_msr, (u32)(-count), 0);
}
/* Note that these events don't tick when the CPU idles. This means /* Note that these events don't tick when the CPU idles. This means
the frequency varies with CPU load. */ the frequency varies with CPU load. */
...@@ -531,7 +585,8 @@ static int setup_p6_watchdog(void) ...@@ -531,7 +585,8 @@ static int setup_p6_watchdog(void)
/* setup the timer */ /* setup the timer */
wrmsr(evntsel_msr, evntsel, 0); wrmsr(evntsel_msr, evntsel, 0);
write_watchdog_counter(perfctr_msr, "P6_PERFCTR0"); nmi_hz = adjust_for_32bit_ctr(nmi_hz);
write_watchdog_counter32(perfctr_msr, "P6_PERFCTR0");
apic_write(APIC_LVTPC, APIC_DM_NMI); apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= P6_EVNTSEL0_ENABLE; evntsel |= P6_EVNTSEL0_ENABLE;
wrmsr(evntsel_msr, evntsel, 0); wrmsr(evntsel_msr, evntsel, 0);
...@@ -704,7 +759,8 @@ static int setup_intel_arch_watchdog(void) ...@@ -704,7 +759,8 @@ static int setup_intel_arch_watchdog(void)
/* setup the timer */ /* setup the timer */
wrmsr(evntsel_msr, evntsel, 0); wrmsr(evntsel_msr, evntsel, 0);
write_watchdog_counter(perfctr_msr, "INTEL_ARCH_PERFCTR0"); nmi_hz = adjust_for_32bit_ctr(nmi_hz);
write_watchdog_counter32(perfctr_msr, "INTEL_ARCH_PERFCTR0");
apic_write(APIC_LVTPC, APIC_DM_NMI); apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= ARCH_PERFMON_EVENTSEL0_ENABLE; evntsel |= ARCH_PERFMON_EVENTSEL0_ENABLE;
wrmsr(evntsel_msr, evntsel, 0); wrmsr(evntsel_msr, evntsel, 0);
...@@ -762,7 +818,8 @@ void setup_apic_nmi_watchdog (void *unused) ...@@ -762,7 +818,8 @@ void setup_apic_nmi_watchdog (void *unused)
if (nmi_watchdog == NMI_LOCAL_APIC) { if (nmi_watchdog == NMI_LOCAL_APIC) {
switch (boot_cpu_data.x86_vendor) { switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_AMD: case X86_VENDOR_AMD:
if (boot_cpu_data.x86 != 6 && boot_cpu_data.x86 != 15) if (boot_cpu_data.x86 != 6 && boot_cpu_data.x86 != 15 &&
boot_cpu_data.x86 != 16)
return; return;
if (!setup_k7_watchdog()) if (!setup_k7_watchdog())
return; return;
...@@ -956,6 +1013,8 @@ __kprobes int nmi_watchdog_tick(struct pt_regs * regs, unsigned reason) ...@@ -956,6 +1013,8 @@ __kprobes int nmi_watchdog_tick(struct pt_regs * regs, unsigned reason)
dummy &= ~P4_CCCR_OVF; dummy &= ~P4_CCCR_OVF;
wrmsrl(wd->cccr_msr, dummy); wrmsrl(wd->cccr_msr, dummy);
apic_write(APIC_LVTPC, APIC_DM_NMI); apic_write(APIC_LVTPC, APIC_DM_NMI);
/* start the cycle over again */
write_watchdog_counter(wd->perfctr_msr, NULL);
} }
else if (wd->perfctr_msr == MSR_P6_PERFCTR0 || else if (wd->perfctr_msr == MSR_P6_PERFCTR0 ||
wd->perfctr_msr == MSR_ARCH_PERFMON_PERFCTR0) { wd->perfctr_msr == MSR_ARCH_PERFMON_PERFCTR0) {
...@@ -964,9 +1023,12 @@ __kprobes int nmi_watchdog_tick(struct pt_regs * regs, unsigned reason) ...@@ -964,9 +1023,12 @@ __kprobes int nmi_watchdog_tick(struct pt_regs * regs, unsigned reason)
* other P6 variant. * other P6 variant.
* ArchPerfmon/Core Duo also needs this */ * ArchPerfmon/Core Duo also needs this */
apic_write(APIC_LVTPC, APIC_DM_NMI); apic_write(APIC_LVTPC, APIC_DM_NMI);
/* P6/ARCH_PERFMON has 32 bit counter write */
write_watchdog_counter32(wd->perfctr_msr, NULL);
} else {
/* start the cycle over again */
write_watchdog_counter(wd->perfctr_msr, NULL);
} }
/* start the cycle over again */
write_watchdog_counter(wd->perfctr_msr, NULL);
rc = 1; rc = 1;
} else if (nmi_watchdog == NMI_IO_APIC) { } else if (nmi_watchdog == NMI_IO_APIC) {
/* don't know how to accurately check for this. /* don't know how to accurately check for this.
......
#include <linux/platform_device.h>
#include <linux/errno.h>
#include <linux/init.h>
static __init int add_pcspkr(void)
{
struct platform_device *pd;
int ret;
pd = platform_device_alloc("pcspkr", -1);
if (!pd)
return -ENOMEM;
ret = platform_device_add(pd);
if (ret)
platform_device_put(pd);
return ret;
}
device_initcall(add_pcspkr);
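This registers a bare platform device named "pcspkr"; the platform bus matches drivers to devices purely by that name. Schematically, the consumer side looks like this (a hypothetical sketch, not part of this patch):

	/* sketch: a platform driver that would bind to the device above */
	static int pcspkr_probe(struct platform_device *dev)
	{
		return 0;	/* hypothetical: claim the speaker */
	}

	static struct platform_driver pcspkr_driver = {
		.probe	= pcspkr_probe,
		.driver	= {
			.name = "pcspkr",	/* must match the device name */
		},
	};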
...@@ -48,6 +48,7 @@ ...@@ -48,6 +48,7 @@
#include <asm/i387.h> #include <asm/i387.h>
#include <asm/desc.h> #include <asm/desc.h>
#include <asm/vm86.h> #include <asm/vm86.h>
#include <asm/idle.h>
#ifdef CONFIG_MATH_EMULATION #ifdef CONFIG_MATH_EMULATION
#include <asm/math_emu.h> #include <asm/math_emu.h>
#endif #endif
...@@ -80,6 +81,42 @@ void (*pm_idle)(void); ...@@ -80,6 +81,42 @@ void (*pm_idle)(void);
EXPORT_SYMBOL(pm_idle); EXPORT_SYMBOL(pm_idle);
static DEFINE_PER_CPU(unsigned int, cpu_idle_state); static DEFINE_PER_CPU(unsigned int, cpu_idle_state);
static ATOMIC_NOTIFIER_HEAD(idle_notifier);
void idle_notifier_register(struct notifier_block *n)
{
atomic_notifier_chain_register(&idle_notifier, n);
}
void idle_notifier_unregister(struct notifier_block *n)
{
atomic_notifier_chain_unregister(&idle_notifier, n);
}
static DEFINE_PER_CPU(volatile unsigned long, idle_state);
void enter_idle(void)
{
/* needs to be atomic w.r.t. interrupts, not against other CPUs */
__set_bit(0, &__get_cpu_var(idle_state));
atomic_notifier_call_chain(&idle_notifier, IDLE_START, NULL);
}
static void __exit_idle(void)
{
/* needs to be atomic w.r.t. interrupts, not against other CPUs */
if (__test_and_clear_bit(0, &__get_cpu_var(idle_state)) == 0)
return;
atomic_notifier_call_chain(&idle_notifier, IDLE_END, NULL);
}
void exit_idle(void)
{
if (current->pid)
return;
__exit_idle();
}
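A minimal consumer of the new notifier chain might look like this (a hypothetical sketch, not part of the patch; IDLE_START and IDLE_END are the asm/idle.h event codes passed by enter_idle() and __exit_idle() above):

	#include <linux/notifier.h>
	#include <asm/idle.h>

	static int my_idle_event(struct notifier_block *nb,
				 unsigned long action, void *unused)
	{
		if (action == IDLE_START)
			;	/* this CPU is about to run its idle routine */
		else if (action == IDLE_END)
			;	/* an interrupt brought it back out */
		return NOTIFY_OK;
	}

	static struct notifier_block my_idle_nb = {
		.notifier_call = my_idle_event,
	};

	/* during driver initialisation: */
	idle_notifier_register(&my_idle_nb);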
void disable_hlt(void) void disable_hlt(void)
{ {
hlt_counter++; hlt_counter++;
...@@ -130,6 +167,7 @@ EXPORT_SYMBOL(default_idle); ...@@ -130,6 +167,7 @@ EXPORT_SYMBOL(default_idle);
*/ */
static void poll_idle (void) static void poll_idle (void)
{ {
local_irq_enable();
cpu_relax(); cpu_relax();
} }
...@@ -189,7 +227,16 @@ void cpu_idle(void) ...@@ -189,7 +227,16 @@ void cpu_idle(void)
play_dead(); play_dead();
__get_cpu_var(irq_stat).idle_timestamp = jiffies; __get_cpu_var(irq_stat).idle_timestamp = jiffies;
/*
* Idle routines should keep interrupts disabled
* from here on, until they go to idle.
* Otherwise, idle callbacks can misfire.
*/
local_irq_disable();
enter_idle();
idle(); idle();
__exit_idle();
} }
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
...@@ -243,7 +290,11 @@ void mwait_idle_with_hints(unsigned long eax, unsigned long ecx) ...@@ -243,7 +290,11 @@ void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
__monitor((void *)&current_thread_info()->flags, 0, 0); __monitor((void *)&current_thread_info()->flags, 0, 0);
smp_mb(); smp_mb();
if (!need_resched()) if (!need_resched())
__mwait(eax, ecx); __sti_mwait(eax, ecx);
else
local_irq_enable();
} else {
local_irq_enable();
} }
} }
...@@ -308,8 +359,8 @@ void show_regs(struct pt_regs * regs) ...@@ -308,8 +359,8 @@ void show_regs(struct pt_regs * regs)
regs->eax,regs->ebx,regs->ecx,regs->edx); regs->eax,regs->ebx,regs->ecx,regs->edx);
printk("ESI: %08lx EDI: %08lx EBP: %08lx", printk("ESI: %08lx EDI: %08lx EBP: %08lx",
regs->esi, regs->edi, regs->ebp); regs->esi, regs->edi, regs->ebp);
printk(" DS: %04x ES: %04x GS: %04x\n", printk(" DS: %04x ES: %04x FS: %04x\n",
0xffff & regs->xds,0xffff & regs->xes, 0xffff & regs->xgs); 0xffff & regs->xds,0xffff & regs->xes, 0xffff & regs->xfs);
cr0 = read_cr0(); cr0 = read_cr0();
cr2 = read_cr2(); cr2 = read_cr2();
...@@ -340,7 +391,7 @@ int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags) ...@@ -340,7 +391,7 @@ int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
regs.xds = __USER_DS; regs.xds = __USER_DS;
regs.xes = __USER_DS; regs.xes = __USER_DS;
regs.xgs = __KERNEL_PDA; regs.xfs = __KERNEL_PDA;
regs.orig_eax = -1; regs.orig_eax = -1;
regs.eip = (unsigned long) kernel_thread_helper; regs.eip = (unsigned long) kernel_thread_helper;
regs.xcs = __KERNEL_CS | get_kernel_rpl(); regs.xcs = __KERNEL_CS | get_kernel_rpl();
...@@ -425,7 +476,7 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long esp, ...@@ -425,7 +476,7 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long esp,
p->thread.eip = (unsigned long) ret_from_fork; p->thread.eip = (unsigned long) ret_from_fork;
savesegment(fs,p->thread.fs); savesegment(gs,p->thread.gs);
tsk = current; tsk = current;
if (unlikely(test_tsk_thread_flag(tsk, TIF_IO_BITMAP))) { if (unlikely(test_tsk_thread_flag(tsk, TIF_IO_BITMAP))) {
...@@ -501,8 +552,8 @@ void dump_thread(struct pt_regs * regs, struct user * dump) ...@@ -501,8 +552,8 @@ void dump_thread(struct pt_regs * regs, struct user * dump)
dump->regs.eax = regs->eax; dump->regs.eax = regs->eax;
dump->regs.ds = regs->xds; dump->regs.ds = regs->xds;
dump->regs.es = regs->xes; dump->regs.es = regs->xes;
savesegment(fs,dump->regs.fs); dump->regs.fs = regs->xfs;
dump->regs.gs = regs->xgs; savesegment(gs,dump->regs.gs);
dump->regs.orig_eax = regs->orig_eax; dump->regs.orig_eax = regs->orig_eax;
dump->regs.eip = regs->eip; dump->regs.eip = regs->eip;
dump->regs.cs = regs->xcs; dump->regs.cs = regs->xcs;
...@@ -653,7 +704,7 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas ...@@ -653,7 +704,7 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
load_esp0(tss, next); load_esp0(tss, next);
/* /*
* Save away %fs. No need to save %gs, as it was saved on the * Save away %gs. No need to save %fs, as it was saved on the
* stack on entry. No need to save %es and %ds, as those are * stack on entry. No need to save %es and %ds, as those are
* always kernel segments while inside the kernel. Doing this * always kernel segments while inside the kernel. Doing this
* before setting the new TLS descriptors avoids the situation * before setting the new TLS descriptors avoids the situation
...@@ -662,7 +713,7 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas ...@@ -662,7 +713,7 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
* used %fs or %gs (it does not today), or if the kernel is * used %fs or %gs (it does not today), or if the kernel is
* running inside of a hypervisor layer. * running inside of a hypervisor layer.
*/ */
savesegment(fs, prev->fs); savesegment(gs, prev->gs);
/* /*
* Load the per-thread Thread-Local Storage descriptor. * Load the per-thread Thread-Local Storage descriptor.
...@@ -670,14 +721,13 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas ...@@ -670,14 +721,13 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
load_TLS(next, cpu); load_TLS(next, cpu);
/* /*
* Restore %fs if needed. * Restore IOPL if needed. In normal use, the flags restore
* * in the switch assembly will handle this. But if the kernel
* Glibc normally makes %fs be zero. * is running virtualized at a non-zero CPL, the popf will
* not restore flags, so it must be done in a separate step.
*/ */
if (unlikely(prev->fs | next->fs)) if (get_kernel_rpl() && unlikely(prev->iopl != next->iopl))
loadsegment(fs, next->fs); set_iopl_mask(next->iopl);
write_pda(pcurrent, next_p);
/* /*
* Now maybe handle debug registers and/or IO bitmaps * Now maybe handle debug registers and/or IO bitmaps
...@@ -688,6 +738,15 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas ...@@ -688,6 +738,15 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
disable_tsc(prev_p, next_p); disable_tsc(prev_p, next_p);
/*
* Leave lazy mode, flushing any hypercalls made here.
* This must be done before restoring TLS segments so
* the GDT and LDT are properly updated, and must be
* done before math_state_restore, so the TS bit is up
* to date.
*/
arch_leave_lazy_cpu_mode();
/* If the task has used fpu the last 5 timeslices, just do a full /* If the task has used fpu the last 5 timeslices, just do a full
* restore of the math state immediately to avoid the trap; the * restore of the math state immediately to avoid the trap; the
* chances of needing FPU soon are obviously high now * chances of needing FPU soon are obviously high now
...@@ -695,6 +754,14 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas ...@@ -695,6 +754,14 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
if (next_p->fpu_counter > 5) if (next_p->fpu_counter > 5)
math_state_restore(); math_state_restore();
/*
* Restore %gs if needed (which is common)
*/
if (prev->gs | next->gs)
loadsegment(gs, next->gs);
write_pda(pcurrent, next_p);
return prev_p; return prev_p;
} }
......
...@@ -89,14 +89,14 @@ static int putreg(struct task_struct *child, ...@@ -89,14 +89,14 @@ static int putreg(struct task_struct *child,
unsigned long regno, unsigned long value) unsigned long regno, unsigned long value)
{ {
switch (regno >> 2) { switch (regno >> 2) {
case FS: case GS:
if (value && (value & 3) != 3) if (value && (value & 3) != 3)
return -EIO; return -EIO;
child->thread.fs = value; child->thread.gs = value;
return 0; return 0;
case DS: case DS:
case ES: case ES:
case GS: case FS:
if (value && (value & 3) != 3) if (value && (value & 3) != 3)
return -EIO; return -EIO;
value &= 0xffff; value &= 0xffff;
...@@ -112,7 +112,7 @@ static int putreg(struct task_struct *child, ...@@ -112,7 +112,7 @@ static int putreg(struct task_struct *child,
value |= get_stack_long(child, EFL_OFFSET) & ~FLAG_MASK; value |= get_stack_long(child, EFL_OFFSET) & ~FLAG_MASK;
break; break;
} }
if (regno > ES*4) if (regno > FS*4)
regno -= 1*4; regno -= 1*4;
put_stack_long(child, regno, value); put_stack_long(child, regno, value);
return 0; return 0;
...@@ -124,18 +124,18 @@ static unsigned long getreg(struct task_struct *child, ...@@ -124,18 +124,18 @@ static unsigned long getreg(struct task_struct *child,
unsigned long retval = ~0UL; unsigned long retval = ~0UL;
switch (regno >> 2) { switch (regno >> 2) {
case FS: case GS:
retval = child->thread.fs; retval = child->thread.gs;
break; break;
case DS: case DS:
case ES: case ES:
case GS: case FS:
case SS: case SS:
case CS: case CS:
retval = 0xffff; retval = 0xffff;
/* fall through */ /* fall through */
default: default:
if (regno > ES*4) if (regno > FS*4)
regno -= 1*4; regno -= 1*4;
retval &= get_stack_long(child, regno); retval &= get_stack_long(child, regno);
} }
......
...@@ -33,7 +33,6 @@ ...@@ -33,7 +33,6 @@
#include <linux/initrd.h> #include <linux/initrd.h>
#include <linux/bootmem.h> #include <linux/bootmem.h>
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include <linux/platform_device.h>
#include <linux/console.h> #include <linux/console.h>
#include <linux/mca.h> #include <linux/mca.h>
#include <linux/root_dev.h> #include <linux/root_dev.h>
...@@ -60,6 +59,7 @@ ...@@ -60,6 +59,7 @@
#include <asm/io_apic.h> #include <asm/io_apic.h>
#include <asm/ist.h> #include <asm/ist.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/vmi.h>
#include <setup_arch.h> #include <setup_arch.h>
#include <bios_ebda.h> #include <bios_ebda.h>
...@@ -581,6 +581,14 @@ void __init setup_arch(char **cmdline_p) ...@@ -581,6 +581,14 @@ void __init setup_arch(char **cmdline_p)
max_low_pfn = setup_memory(); max_low_pfn = setup_memory();
#ifdef CONFIG_VMI
/*
* Must be after max_low_pfn is determined, and before kernel
* pagetables are set up.
*/
vmi_init();
#endif
/* /*
* NOTE: before this point _nobody_ is allowed to allocate * NOTE: before this point _nobody_ is allowed to allocate
* any memory using the bootmem allocator. Although the * any memory using the bootmem allocator. Although the
...@@ -651,28 +659,3 @@ void __init setup_arch(char **cmdline_p) ...@@ -651,28 +659,3 @@ void __init setup_arch(char **cmdline_p)
#endif #endif
tsc_init(); tsc_init();
} }
static __init int add_pcspkr(void)
{
struct platform_device *pd;
int ret;
pd = platform_device_alloc("pcspkr", -1);
if (!pd)
return -ENOMEM;
ret = platform_device_add(pd);
if (ret)
platform_device_put(pd);
return ret;
}
device_initcall(add_pcspkr);
/*
* Local Variables:
* mode:c
* c-file-style:"k&r"
* c-basic-offset:8
* End:
*/
...@@ -21,6 +21,7 @@ ...@@ -21,6 +21,7 @@
#include <linux/suspend.h> #include <linux/suspend.h>
#include <linux/ptrace.h> #include <linux/ptrace.h>
#include <linux/elf.h> #include <linux/elf.h>
#include <linux/binfmts.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/ucontext.h> #include <asm/ucontext.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
...@@ -128,8 +129,8 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *peax ...@@ -128,8 +129,8 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *peax
X86_EFLAGS_TF | X86_EFLAGS_SF | X86_EFLAGS_ZF | \ X86_EFLAGS_TF | X86_EFLAGS_SF | X86_EFLAGS_ZF | \
X86_EFLAGS_AF | X86_EFLAGS_PF | X86_EFLAGS_CF) X86_EFLAGS_AF | X86_EFLAGS_PF | X86_EFLAGS_CF)
COPY_SEG(gs); GET_SEG(gs);
GET_SEG(fs); COPY_SEG(fs);
COPY_SEG(es); COPY_SEG(es);
COPY_SEG(ds); COPY_SEG(ds);
COPY(edi); COPY(edi);
...@@ -244,9 +245,9 @@ setup_sigcontext(struct sigcontext __user *sc, struct _fpstate __user *fpstate, ...@@ -244,9 +245,9 @@ setup_sigcontext(struct sigcontext __user *sc, struct _fpstate __user *fpstate,
{ {
int tmp, err = 0; int tmp, err = 0;
err |= __put_user(regs->xgs, (unsigned int __user *)&sc->gs); err |= __put_user(regs->xfs, (unsigned int __user *)&sc->fs);
savesegment(fs, tmp); savesegment(gs, tmp);
err |= __put_user(tmp, (unsigned int __user *)&sc->fs); err |= __put_user(tmp, (unsigned int __user *)&sc->gs);
err |= __put_user(regs->xes, (unsigned int __user *)&sc->es); err |= __put_user(regs->xes, (unsigned int __user *)&sc->es);
err |= __put_user(regs->xds, (unsigned int __user *)&sc->ds); err |= __put_user(regs->xds, (unsigned int __user *)&sc->ds);
...@@ -349,7 +350,10 @@ static int setup_frame(int sig, struct k_sigaction *ka, ...@@ -349,7 +350,10 @@ static int setup_frame(int sig, struct k_sigaction *ka,
goto give_sigsegv; goto give_sigsegv;
} }
restorer = (void *)VDSO_SYM(&__kernel_sigreturn); if (current->binfmt->hasvdso)
restorer = (void *)VDSO_SYM(&__kernel_sigreturn);
else
restorer = (void *)&frame->retcode;
if (ka->sa.sa_flags & SA_RESTORER) if (ka->sa.sa_flags & SA_RESTORER)
restorer = ka->sa.sa_restorer; restorer = ka->sa.sa_restorer;
......
...@@ -23,6 +23,7 @@ ...@@ -23,6 +23,7 @@
#include <asm/mtrr.h> #include <asm/mtrr.h>
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
#include <asm/idle.h>
#include <mach_apic.h> #include <mach_apic.h>
/* /*
...@@ -374,8 +375,7 @@ static void flush_tlb_others(cpumask_t cpumask, struct mm_struct *mm, ...@@ -374,8 +375,7 @@ static void flush_tlb_others(cpumask_t cpumask, struct mm_struct *mm,
/* /*
* i'm not happy about this global shared spinlock in the * i'm not happy about this global shared spinlock in the
* MM hot path, but we'll see how contended it is. * MM hot path, but we'll see how contended it is.
* Temporarily this turns IRQs off, so that lockups are * AK: x86-64 has a faster method that could be ported.
* detected by the NMI watchdog.
*/ */
spin_lock(&tlbstate_lock); spin_lock(&tlbstate_lock);
...@@ -400,7 +400,7 @@ static void flush_tlb_others(cpumask_t cpumask, struct mm_struct *mm, ...@@ -400,7 +400,7 @@ static void flush_tlb_others(cpumask_t cpumask, struct mm_struct *mm,
while (!cpus_empty(flush_cpumask)) while (!cpus_empty(flush_cpumask))
/* nothing. lockup detection does not belong here */ /* nothing. lockup detection does not belong here */
mb(); cpu_relax();
flush_mm = NULL; flush_mm = NULL;
flush_va = 0; flush_va = 0;
...@@ -624,6 +624,7 @@ fastcall void smp_call_function_interrupt(struct pt_regs *regs) ...@@ -624,6 +624,7 @@ fastcall void smp_call_function_interrupt(struct pt_regs *regs)
/* /*
* At this point the info structure may be out of scope unless wait==1 * At this point the info structure may be out of scope unless wait==1
*/ */
exit_idle();
irq_enter(); irq_enter();
(*func)(info); (*func)(info);
irq_exit(); irq_exit();
......
...@@ -63,6 +63,7 @@ ...@@ -63,6 +63,7 @@
#include <mach_apic.h> #include <mach_apic.h>
#include <mach_wakecpu.h> #include <mach_wakecpu.h>
#include <smpboot_hooks.h> #include <smpboot_hooks.h>
#include <asm/vmi.h>
/* Set if we find a B stepping CPU */ /* Set if we find a B stepping CPU */
static int __devinitdata smp_b_stepping; static int __devinitdata smp_b_stepping;
...@@ -545,12 +546,15 @@ static void __cpuinit start_secondary(void *unused) ...@@ -545,12 +546,15 @@ static void __cpuinit start_secondary(void *unused)
* booting is so fragile that we want to limit the * booting is so fragile that we want to limit the
* things done here to the most necessary things. * things done here to the most necessary things.
*/ */
#ifdef CONFIG_VMI
vmi_bringup();
#endif
secondary_cpu_init(); secondary_cpu_init();
preempt_disable(); preempt_disable();
smp_callin(); smp_callin();
while (!cpu_isset(smp_processor_id(), smp_commenced_mask)) while (!cpu_isset(smp_processor_id(), smp_commenced_mask))
rep_nop(); rep_nop();
setup_secondary_APIC_clock(); setup_secondary_clock();
if (nmi_watchdog == NMI_IO_APIC) { if (nmi_watchdog == NMI_IO_APIC) {
disable_8259A_irq(0); disable_8259A_irq(0);
enable_NMI_through_LVT0(NULL); enable_NMI_through_LVT0(NULL);
...@@ -619,7 +623,6 @@ extern struct { ...@@ -619,7 +623,6 @@ extern struct {
unsigned short ss; unsigned short ss;
} stack_start; } stack_start;
extern struct i386_pda *start_pda; extern struct i386_pda *start_pda;
extern struct Xgt_desc_struct cpu_gdt_descr;
#ifdef CONFIG_NUMA #ifdef CONFIG_NUMA
...@@ -834,6 +837,13 @@ wakeup_secondary_cpu(int phys_apicid, unsigned long start_eip) ...@@ -834,6 +837,13 @@ wakeup_secondary_cpu(int phys_apicid, unsigned long start_eip)
else else
num_starts = 0; num_starts = 0;
/*
* Paravirt / VMI wants a startup IPI hook here to set up the
* target processor state.
*/
startup_ipi_hook(phys_apicid, (unsigned long) start_secondary,
(unsigned long) stack_start.esp);
/* /*
* Run STARTUP IPI loop. * Run STARTUP IPI loop.
*/ */
...@@ -1320,7 +1330,7 @@ static void __init smp_boot_cpus(unsigned int max_cpus) ...@@ -1320,7 +1330,7 @@ static void __init smp_boot_cpus(unsigned int max_cpus)
smpboot_setup_io_apic(); smpboot_setup_io_apic();
setup_boot_APIC_clock(); setup_boot_clock();
/* /*
* Synchronize the TSC with the AP * Synchronize the TSC with the AP
......
...@@ -78,7 +78,7 @@ int __init sysenter_setup(void) ...@@ -78,7 +78,7 @@ int __init sysenter_setup(void)
syscall_pages[0] = virt_to_page(syscall_page); syscall_pages[0] = virt_to_page(syscall_page);
#ifdef CONFIG_COMPAT_VDSO #ifdef CONFIG_COMPAT_VDSO
__set_fixmap(FIX_VDSO, __pa(syscall_page), PAGE_READONLY); __set_fixmap(FIX_VDSO, __pa(syscall_page), PAGE_READONLY_EXEC);
printk("Compat vDSO mapped to %08lx.\n", __fix_to_virt(FIX_VDSO)); printk("Compat vDSO mapped to %08lx.\n", __fix_to_virt(FIX_VDSO));
#endif #endif
......
...@@ -131,15 +131,13 @@ unsigned long profile_pc(struct pt_regs *regs) ...@@ -131,15 +131,13 @@ unsigned long profile_pc(struct pt_regs *regs)
unsigned long pc = instruction_pointer(regs); unsigned long pc = instruction_pointer(regs);
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
if (!user_mode_vm(regs) && in_lock_functions(pc)) { if (!v8086_mode(regs) && SEGMENT_IS_KERNEL_CODE(regs->xcs) &&
in_lock_functions(pc)) {
#ifdef CONFIG_FRAME_POINTER #ifdef CONFIG_FRAME_POINTER
return *(unsigned long *)(regs->ebp + 4); return *(unsigned long *)(regs->ebp + 4);
#else #else
unsigned long *sp; unsigned long *sp = (unsigned long *)&regs->esp;
if ((regs->xcs & 3) == 0)
sp = (unsigned long *)&regs->esp;
else
sp = (unsigned long *)regs->esp;
/* Return address is either directly at stack pointer /* Return address is either directly at stack pointer
or above a saved eflags. Eflags has bits 22-31 zero, or above a saved eflags. Eflags has bits 22-31 zero,
kernel addresses don't. */ kernel addresses don't. */
...@@ -232,6 +230,7 @@ EXPORT_SYMBOL(get_cmos_time); ...@@ -232,6 +230,7 @@ EXPORT_SYMBOL(get_cmos_time);
static void sync_cmos_clock(unsigned long dummy); static void sync_cmos_clock(unsigned long dummy);
static DEFINE_TIMER(sync_cmos_timer, sync_cmos_clock, 0, 0); static DEFINE_TIMER(sync_cmos_timer, sync_cmos_clock, 0, 0);
int no_sync_cmos_clock;
static void sync_cmos_clock(unsigned long dummy) static void sync_cmos_clock(unsigned long dummy)
{ {
...@@ -275,7 +274,8 @@ static void sync_cmos_clock(unsigned long dummy) ...@@ -275,7 +274,8 @@ static void sync_cmos_clock(unsigned long dummy)
void notify_arch_cmos_timer(void) void notify_arch_cmos_timer(void)
{ {
mod_timer(&sync_cmos_timer, jiffies + 1); if (!no_sync_cmos_clock)
mod_timer(&sync_cmos_timer, jiffies + 1);
} }
static long clock_cmos_diff; static long clock_cmos_diff;
......
@@ -94,6 +94,7 @@ asmlinkage void spurious_interrupt_bug(void);
 asmlinkage void machine_check(void);

 int kstack_depth_to_print = 24;
+static unsigned int code_bytes = 64;
 ATOMIC_NOTIFIER_HEAD(i386die_chain);

 int register_die_notifier(struct notifier_block *nb)
@@ -291,10 +292,11 @@ void show_registers(struct pt_regs *regs)
 	int i;
 	int in_kernel = 1;
 	unsigned long esp;
-	unsigned short ss;
+	unsigned short ss, gs;

 	esp = (unsigned long) (&regs->esp);
 	savesegment(ss, ss);
+	savesegment(gs, gs);
 	if (user_mode_vm(regs)) {
 		in_kernel = 0;
 		esp = regs->esp;
@@ -313,8 +315,8 @@ void show_registers(struct pt_regs *regs)
 		regs->eax, regs->ebx, regs->ecx, regs->edx);
 	printk(KERN_EMERG "esi: %08lx   edi: %08lx   ebp: %08lx   esp: %08lx\n",
 		regs->esi, regs->edi, regs->ebp, esp);
-	printk(KERN_EMERG "ds: %04x   es: %04x   ss: %04x\n",
-		regs->xds & 0xffff, regs->xes & 0xffff, ss);
+	printk(KERN_EMERG "ds: %04x   es: %04x   fs: %04x  gs: %04x  ss: %04x\n",
+		regs->xds & 0xffff, regs->xes & 0xffff, regs->xfs & 0xffff, gs, ss);
 	printk(KERN_EMERG "Process %.*s (pid: %d, ti=%p task=%p task.ti=%p)",
 		TASK_COMM_LEN, current->comm, current->pid,
 		current_thread_info(), current, current->thread_info);
@@ -324,7 +326,8 @@ void show_registers(struct pt_regs *regs)
 	 */
 	if (in_kernel) {
 		u8 *eip;
-		int code_bytes = 64;
+		unsigned int code_prologue = code_bytes * 43 / 64;
+		unsigned int code_len = code_bytes;
 		unsigned char c;

 		printk("\n" KERN_EMERG "Stack: ");
@@ -332,14 +335,14 @@ void show_registers(struct pt_regs *regs)
 		printk(KERN_EMERG "Code: ");

-		eip = (u8 *)regs->eip - 43;
+		eip = (u8 *)regs->eip - code_prologue;
 		if (eip < (u8 *)PAGE_OFFSET ||
 			probe_kernel_address(eip, c)) {
 			/* try starting at EIP */
 			eip = (u8 *)regs->eip;
-			code_bytes = 32;
+			code_len = code_len - code_prologue + 1;
 		}
-		for (i = 0; i < code_bytes; i++, eip++) {
+		for (i = 0; i < code_len; i++, eip++) {
 			if (eip < (u8 *)PAGE_OFFSET ||
 				probe_kernel_address(eip, c)) {
 				printk(" Bad EIP value.");
@@ -1191,3 +1194,13 @@ static int __init kstack_setup(char *s)
 	return 1;
 }
 __setup("kstack=", kstack_setup);
+
+static int __init code_bytes_setup(char *s)
+{
+	code_bytes = simple_strtoul(s, NULL, 0);
+	if (code_bytes > 8192)
+		code_bytes = 8192;
+
+	return 1;
+}
+__setup("code_bytes=", code_bytes_setup);
@@ -23,6 +23,7 @@
  * an extra value to store the TSC freq
  */
 unsigned int tsc_khz;
+unsigned long long (*custom_sched_clock)(void);

 int tsc_disable;
@@ -107,14 +108,14 @@ unsigned long long sched_clock(void)
 {
 	unsigned long long this_offset;

+	if (unlikely(custom_sched_clock))
+		return (*custom_sched_clock)();
+
 	/*
-	 * in the NUMA case we dont use the TSC as they are not
-	 * synchronized across all CPUs.
+	 * Fall back to jiffies if there's no TSC available:
 	 */
-#ifndef CONFIG_NUMA
-	if (!cpu_khz || check_tsc_unstable())
-#endif
-		/* no locking but a rare wrong value is not a big deal */
+	if (unlikely(tsc_disable))
+		/* No locking but a rare wrong value is not a big deal: */
 		return (jiffies_64 - INITIAL_JIFFIES) * (1000000000 / HZ);

 	/* read the Time Stamp Counter: */
@@ -194,13 +195,13 @@ EXPORT_SYMBOL(recalibrate_cpu_khz);
 void __init tsc_init(void)
 {
 	if (!cpu_has_tsc || tsc_disable)
-		return;
+		goto out_no_tsc;

 	cpu_khz = calculate_cpu_khz();
 	tsc_khz = cpu_khz;

 	if (!cpu_khz)
-		return;
+		goto out_no_tsc;

 	printk("Detected %lu.%03lu MHz processor.\n",
 		(unsigned long)cpu_khz / 1000,
@@ -208,6 +209,15 @@ void __init tsc_init(void)
 	set_cyc2ns_scale(cpu_khz);
 	use_tsc_delay();
+	return;
+
+out_no_tsc:
+	/*
+	 * Set the tsc_disable flag if there's no TSC support, this
+	 * makes it a fast flag for the kernel to see whether it
+	 * should be using the TSC.
+	 */
+	tsc_disable = 1;
 }

 #ifdef CONFIG_CPU_FREQ
...
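
The new custom_sched_clock pointer gives a paravirt clock source first claim on sched_clock() before any TSC or jiffies logic runs. A minimal sketch of how a backend would install itself; everything here except custom_sched_clock is a made-up placeholder, not an API from this tree:

	/* Hypothetical paravirt clock backend (illustrative names only). */
	static unsigned long long example_sched_clock(void)
	{
		/* hypervisor_cycles() and CYCLES_PER_NS are placeholders */
		return hypervisor_cycles() / CYCLES_PER_NS;
	}

	static void __init example_time_init(void)
	{
		custom_sched_clock = example_sched_clock;
	}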
@@ -96,12 +96,12 @@ static int copy_vm86_regs_to_user(struct vm86_regs __user *user,
 {
 	int ret = 0;

-	/* kernel_vm86_regs is missing xfs, so copy everything up to
-	   (but not including) xgs, and then rest after xgs. */
-	ret += copy_to_user(user, regs, offsetof(struct kernel_vm86_regs, pt.xgs));
-	ret += copy_to_user(&user->__null_gs, &regs->pt.xgs,
+	/* kernel_vm86_regs is missing xgs, so copy everything up to
+	   (but not including) orig_eax, and then rest including orig_eax. */
+	ret += copy_to_user(user, regs, offsetof(struct kernel_vm86_regs, pt.orig_eax));
+	ret += copy_to_user(&user->orig_eax, &regs->pt.orig_eax,
 			    sizeof(struct kernel_vm86_regs) -
-				    offsetof(struct kernel_vm86_regs, pt.xgs));
+				    offsetof(struct kernel_vm86_regs, pt.orig_eax));

 	return ret;
 }
@@ -113,12 +113,13 @@ static int copy_vm86_regs_from_user(struct kernel_vm86_regs *regs,
 {
 	int ret = 0;

-	ret += copy_from_user(regs, user, offsetof(struct kernel_vm86_regs, pt.xgs));
-	ret += copy_from_user(&regs->pt.xgs, &user->__null_gs,
+	/* copy eax-xfs inclusive */
+	ret += copy_from_user(regs, user, offsetof(struct kernel_vm86_regs, pt.orig_eax));
+	/* copy orig_eax-__gsh+extra */
+	ret += copy_from_user(&regs->pt.orig_eax, &user->orig_eax,
 			    sizeof(struct kernel_vm86_regs) -
-				    offsetof(struct kernel_vm86_regs, pt.xgs) +
+				    offsetof(struct kernel_vm86_regs, pt.orig_eax) +
 			    extra);
 	return ret;
 }
@@ -157,8 +158,8 @@ struct pt_regs * fastcall save_v86_state(struct kernel_vm86_regs * regs)

 	ret = KVM86->regs32;
-	loadsegment(fs, current->thread.saved_fs);
-	ret->xgs = current->thread.saved_gs;
+	ret->xfs = current->thread.saved_fs;
+	loadsegment(gs, current->thread.saved_gs);
 	return ret;
 }
@@ -285,9 +286,9 @@ static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk
  */
 	info->regs.pt.xds = 0;
 	info->regs.pt.xes = 0;
-	info->regs.pt.xgs = 0;
+	info->regs.pt.xfs = 0;

-/* we are clearing fs later just before "jmp resume_userspace",
+/* we are clearing gs later just before "jmp resume_userspace",
  * because it is not saved/restored.
  */
@@ -321,8 +322,8 @@ static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk
  */
 	info->regs32->eax = 0;
 	tsk->thread.saved_esp0 = tsk->thread.esp0;
-	savesegment(fs, tsk->thread.saved_fs);
-	tsk->thread.saved_gs = info->regs32->xgs;
+	tsk->thread.saved_fs = info->regs32->xfs;
+	savesegment(gs, tsk->thread.saved_gs);

 	tss = &per_cpu(init_tss, get_cpu());
 	tsk->thread.esp0 = (unsigned long) &info->VM86_TSS_ESP0;
@@ -342,7 +343,7 @@ static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk
 	__asm__ __volatile__(
 		"movl %0,%%esp\n\t"
 		"movl %1,%%ebp\n\t"
-		"mov %2, %%fs\n\t"
+		"mov %2, %%gs\n\t"
 		"jmp resume_userspace"
 		: /* no outputs */
 		:"r" (&info->regs), "r" (task_thread_info(tsk)), "r" (0));
...
@@ -37,9 +37,14 @@ SECTIONS
 {
   . = LOAD_OFFSET + LOAD_PHYSICAL_ADDR;
   phys_startup_32 = startup_32 - LOAD_OFFSET;
+
+  .text.head : AT(ADDR(.text.head) - LOAD_OFFSET) {
+  	_text = .;			/* Text and read-only data */
+  	*(.text.head)
+  } :text = 0x9090

   /* read-only */
   .text : AT(ADDR(.text) - LOAD_OFFSET) {
-  	_text = .;			/* Text and read-only data */
   	*(.text)
   	SCHED_TEXT
   	LOCK_TEXT
...
@@ -56,15 +56,14 @@ static int reg_offset_vm86[] = {
 #define VM86_REG_(x) (*(unsigned short *) \
 		      (reg_offset_vm86[((unsigned)x)]+(u_char *) FPU_info))

-/* These are dummy, fs and gs are not saved on the stack. */
-#define ___FS ___ds
+/* This dummy, gs is not saved on the stack. */
 #define ___GS ___ds

 static int reg_offset_pm[] = {
 	offsetof(struct info,___cs),
 	offsetof(struct info,___ds),
 	offsetof(struct info,___es),
-	offsetof(struct info,___FS),
+	offsetof(struct info,___fs),
 	offsetof(struct info,___GS),
 	offsetof(struct info,___ss),
 	offsetof(struct info,___ds)
@@ -169,13 +168,10 @@ static long pm_address(u_char FPU_modrm, u_char segment,
   switch ( segment )
     {
-      /* fs and gs aren't used by the kernel, so they still have their
-	 user-space values. */
-    case PREFIX_FS_-1:
-      /* N.B. - movl %seg, mem is a 2 byte write regardless of prefix */
-      savesegment(fs, addr->selector);
-      break;
+      /* gs isn't used by the kernel, so it still has its
+	 user-space value. */
     case PREFIX_GS_-1:
+      /* N.B. - movl %seg, mem is a 2 byte write regardless of prefix */
       savesegment(gs, addr->selector);
       break;
     default:
...
@@ -48,9 +48,11 @@
 #define status_word() \
   ((partial_status & ~SW_Top & 0xffff) | ((top << SW_Top_Shift) & SW_Top))
-#define setcc(cc) ({ \
-  partial_status &= ~(SW_C0|SW_C1|SW_C2|SW_C3); \
-  partial_status |= (cc) & (SW_C0|SW_C1|SW_C2|SW_C3); })
+
+static inline void setcc(int cc)
+{
+	partial_status &= ~(SW_C0|SW_C1|SW_C2|SW_C3);
+	partial_status |= (cc) & (SW_C0|SW_C1|SW_C2|SW_C3);
+}

 #ifdef PECULIAR_486
    /* Default, this conveys no information, but an 80486 does it. */
...
@@ -101,7 +101,6 @@ extern void find_max_pfn(void);
 extern void add_one_highpage_init(struct page *, int, int);

 extern struct e820map e820;
-extern unsigned long init_pg_tables_end;
 extern unsigned long highend_pfn, highstart_pfn;
 extern unsigned long max_low_pfn;
 extern unsigned long totalram_pages;
...
@@ -46,17 +46,17 @@ int unregister_page_fault_notifier(struct notifier_block *nb)
 }
 EXPORT_SYMBOL_GPL(unregister_page_fault_notifier);

-static inline int notify_page_fault(enum die_val val, const char *str,
-			struct pt_regs *regs, long err, int trap, int sig)
+static inline int notify_page_fault(struct pt_regs *regs, long err)
 {
 	struct die_args args = {
 		.regs = regs,
-		.str = str,
+		.str = "page fault",
 		.err = err,
-		.trapnr = trap,
-		.signr = sig
+		.trapnr = 14,
+		.signr = SIGSEGV
 	};
-	return atomic_notifier_call_chain(&notify_page_fault_chain, val, &args);
+	return atomic_notifier_call_chain(&notify_page_fault_chain,
+					  DIE_PAGE_FAULT, &args);
 }

 /*
@@ -327,8 +327,7 @@ fastcall void __kprobes do_page_fault(struct pt_regs *regs,
 	if (unlikely(address >= TASK_SIZE)) {
 		if (!(error_code & 0x0000000d) && vmalloc_fault(address) >= 0)
 			return;
-		if (notify_page_fault(DIE_PAGE_FAULT, "page fault", regs, error_code, 14,
-						SIGSEGV) == NOTIFY_STOP)
+		if (notify_page_fault(regs, error_code) == NOTIFY_STOP)
 			return;
 		/*
 		 * Don't take the mm semaphore here. If we fixup a prefetch
@@ -337,8 +336,7 @@ fastcall void __kprobes do_page_fault(struct pt_regs *regs,
 		goto bad_area_nosemaphore;
 	}

-	if (notify_page_fault(DIE_PAGE_FAULT, "page fault", regs, error_code, 14,
-					SIGSEGV) == NOTIFY_STOP)
+	if (notify_page_fault(regs, error_code) == NOTIFY_STOP)
 		return;

 	/* It's safe to allow irq's after cr2 has been saved and the vmalloc
...
@@ -62,6 +62,7 @@ static pmd_t * __init one_md_table_init(pgd_t *pgd)
 #ifdef CONFIG_X86_PAE
 	pmd_table = (pmd_t *) alloc_bootmem_low_pages(PAGE_SIZE);
+	paravirt_alloc_pd(__pa(pmd_table) >> PAGE_SHIFT);
 	set_pgd(pgd, __pgd(__pa(pmd_table) | _PAGE_PRESENT));
 	pud = pud_offset(pgd, 0);
 	if (pmd_table != pmd_offset(pud, 0))
@@ -82,6 +83,7 @@ static pte_t * __init one_page_table_init(pmd_t *pmd)
 {
 	if (pmd_none(*pmd)) {
 		pte_t *page_table = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE);
+		paravirt_alloc_pt(__pa(page_table) >> PAGE_SHIFT);
 		set_pmd(pmd, __pmd(__pa(page_table) | _PAGE_TABLE));
 		if (page_table != pte_offset_kernel(pmd, 0))
 			BUG();
@@ -345,6 +347,8 @@ static void __init pagetable_init (void)
 	/* Init entries of the first-level page table to the zero page */
 	for (i = 0; i < PTRS_PER_PGD; i++)
 		set_pgd(pgd_base + i, __pgd(__pa(empty_zero_page) | _PAGE_PRESENT));
+#else
+	paravirt_alloc_pd(__pa(swapper_pg_dir) >> PAGE_SHIFT);
 #endif

 	/* Enable PSE if available */
...
@@ -60,6 +60,7 @@ static struct page *split_large_page(unsigned long address, pgprot_t prot,
 	address = __pa(address);
 	addr = address & LARGE_PAGE_MASK;
 	pbase = (pte_t *)page_address(base);
+	paravirt_alloc_pt(page_to_pfn(base));
 	for (i = 0; i < PTRS_PER_PTE; i++, addr += PAGE_SIZE) {
 		set_pte(&pbase[i], pfn_pte(addr >> PAGE_SHIFT,
 					   addr == address ? prot : ref_prot));
@@ -172,6 +173,7 @@ __change_page_attr(struct page *page, pgprot_t prot)
 	if (!PageReserved(kpte_page)) {
 		if (cpu_has_pse && (page_private(kpte_page) == 0)) {
 			ClearPagePrivate(kpte_page);
+			paravirt_release_pt(page_to_pfn(kpte_page));
 			list_add(&kpte_page->lru, &df_list);
 			revert_page(kpte_page, address);
 		}
...
@@ -171,6 +171,8 @@ void __set_fixmap (enum fixed_addresses idx, unsigned long phys, pgprot_t flags)
 void reserve_top_address(unsigned long reserve)
 {
 	BUG_ON(fixmaps > 0);
+	printk(KERN_INFO "Reserving virtual address space above 0x%08x\n",
+	       (int)-reserve);
 #ifdef CONFIG_COMPAT_VDSO
 	BUG_ON(reserve != 0);
 #else
@@ -248,9 +250,15 @@ void pgd_ctor(void *pgd, struct kmem_cache *cache, unsigned long unused)
 	clone_pgd_range((pgd_t *)pgd + USER_PTRS_PER_PGD,
 			swapper_pg_dir + USER_PTRS_PER_PGD,
 			KERNEL_PGD_PTRS);
+
 	if (PTRS_PER_PMD > 1)
 		return;

+	/* must happen under lock */
+	paravirt_alloc_pd_clone(__pa(pgd) >> PAGE_SHIFT,
+			__pa(swapper_pg_dir) >> PAGE_SHIFT,
+			USER_PTRS_PER_PGD, PTRS_PER_PGD - USER_PTRS_PER_PGD);
+
 	pgd_list_add(pgd);
 	spin_unlock_irqrestore(&pgd_lock, flags);
 }
@@ -260,6 +268,7 @@ void pgd_dtor(void *pgd, struct kmem_cache *cache, unsigned long unused)
 {
 	unsigned long flags; /* can be called from interrupt context */

+	paravirt_release_pd(__pa(pgd) >> PAGE_SHIFT);
 	spin_lock_irqsave(&pgd_lock, flags);
 	pgd_list_del(pgd);
 	spin_unlock_irqrestore(&pgd_lock, flags);
@@ -277,13 +286,18 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 		pmd_t *pmd = kmem_cache_alloc(pmd_cache, GFP_KERNEL);
 		if (!pmd)
 			goto out_oom;
+		paravirt_alloc_pd(__pa(pmd) >> PAGE_SHIFT);
 		set_pgd(&pgd[i], __pgd(1 + __pa(pmd)));
 	}
 	return pgd;

 out_oom:
-	for (i--; i >= 0; i--)
-		kmem_cache_free(pmd_cache, (void *)__va(pgd_val(pgd[i])-1));
+	for (i--; i >= 0; i--) {
+		pgd_t pgdent = pgd[i];
+		void* pmd = (void *)__va(pgd_val(pgdent)-1);
+		paravirt_release_pd(__pa(pmd) >> PAGE_SHIFT);
+		kmem_cache_free(pmd_cache, pmd);
+	}
 	kmem_cache_free(pgd_cache, pgd);
 	return NULL;
 }
@@ -294,8 +308,12 @@ void pgd_free(pgd_t *pgd)
 	/* in the PAE case user pgd entries are overwritten before usage */
 	if (PTRS_PER_PMD > 1)
-		for (i = 0; i < USER_PTRS_PER_PGD; ++i)
-			kmem_cache_free(pmd_cache, (void *)__va(pgd_val(pgd[i])-1));
+		for (i = 0; i < USER_PTRS_PER_PGD; ++i) {
+			pgd_t pgdent = pgd[i];
+			void* pmd = (void *)__va(pgd_val(pgdent)-1);
+			paravirt_release_pd(__pa(pmd) >> PAGE_SHIFT);
+			kmem_cache_free(pmd_cache, pmd);
+		}
 	/* in the non-PAE case, free_pgtables() clears user pgd entries */
 	kmem_cache_free(pgd_cache, pgd);
 }
@@ -24,7 +24,8 @@
 #define CTR_IS_RESERVED(msrs,c) (msrs->counters[(c)].addr ? 1 : 0)
 #define CTR_READ(l,h,msrs,c) do {rdmsr(msrs->counters[(c)].addr, (l), (h));} while (0)
-#define CTR_WRITE(l,msrs,c) do {wrmsr(msrs->counters[(c)].addr, -(u32)(l), -1);} while (0)
+#define CTR_32BIT_WRITE(l,msrs,c)	\
+	do {wrmsr(msrs->counters[(c)].addr, -(u32)(l), 0);} while (0)
 #define CTR_OVERFLOWED(n) (!((n) & (1U<<31)))

 #define CTRL_IS_RESERVED(msrs,c) (msrs->controls[(c)].addr ? 1 : 0)
@@ -79,7 +80,7 @@ static void ppro_setup_ctrs(struct op_msrs const * const msrs)
 	for (i = 0; i < NUM_COUNTERS; ++i) {
 		if (unlikely(!CTR_IS_RESERVED(msrs,i)))
 			continue;
-		CTR_WRITE(1, msrs, i);
+		CTR_32BIT_WRITE(1, msrs, i);
 	}

 	/* enable active counters */
@@ -87,7 +88,7 @@ static void ppro_setup_ctrs(struct op_msrs const * const msrs)
 		if ((counter_config[i].enabled) && (CTR_IS_RESERVED(msrs,i))) {
 			reset_value[i] = counter_config[i].count;
-			CTR_WRITE(counter_config[i].count, msrs, i);
+			CTR_32BIT_WRITE(counter_config[i].count, msrs, i);
 			CTRL_READ(low, high, msrs, i);
 			CTRL_CLEAR(low);
@@ -116,7 +117,7 @@ static int ppro_check_ctrs(struct pt_regs * const regs,
 		CTR_READ(low, high, msrs, i);
 		if (CTR_OVERFLOWED(low)) {
 			oprofile_add_sample(regs, i);
-			CTR_WRITE(reset_value[i], msrs, i);
+			CTR_32BIT_WRITE(reset_value[i], msrs, i);
 		}
 	}
...
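
The negated counter write is the usual sampling idiom: the counter counts up and flags an overflow, so loading it with -(count) makes it fire after exactly count events. Worked through for an arbitrary sample interval of 100000 events:

	/*
	 * CTR_32BIT_WRITE(100000, msrs, i) loads the counter MSR with
	 *   -(u32)100000 = 0xfffe7960  (bit 31 set)
	 * After 100000 events the counter wraps past 0xffffffff, bit 31
	 * clears, CTR_OVERFLOWED() reports true, and ppro_check_ctrs()
	 * logs a sample and re-arms the counter with the same reset_value.
	 */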
 obj-y				:= i386.o init.o
 obj-$(CONFIG_PCI_BIOS)		+= pcbios.o
-obj-$(CONFIG_PCI_MMCONFIG)	+= mmconfig.o direct.o
+obj-$(CONFIG_PCI_MMCONFIG)	+= mmconfig.o direct.o mmconfig-shared.o
 obj-$(CONFIG_PCI_DIRECT)	+= direct.o

 pci-y				:= fixup.o
...
/*
 * mmconfig-shared.c - Low-level direct PCI config space access via
 *                     MMCONFIG - common code between i386 and x86-64.
 *
 * This code does:
 * - known chipset handling
 * - ACPI decoding and validation
 *
 * Per-architecture code takes care of the mappings and accesses
 * themselves.
 */

#include <linux/pci.h>
#include <linux/init.h>
#include <linux/acpi.h>
#include <linux/bitmap.h>
#include <asm/e820.h>

#include "pci.h"

/* aperture is up to 256MB but BIOS may reserve less */
#define MMCONFIG_APER_MIN	(2 * 1024*1024)
#define MMCONFIG_APER_MAX	(256 * 1024*1024)

DECLARE_BITMAP(pci_mmcfg_fallback_slots, 32*PCI_MMCFG_MAX_CHECK_BUS);

/* K8 systems have some devices (typically in the builtin northbridge)
   that are only accessible using type1
   Normally this can be expressed in the MCFG by not listing them
   and assigning suitable _SEGs, but this isn't implemented in some BIOS.
   Instead try to discover all devices on bus 0 that are unreachable using MM
   and fallback for them. */
static void __init unreachable_devices(void)
{
	int i, bus;
	/* Use the max bus number from ACPI here? */
	for (bus = 0; bus < PCI_MMCFG_MAX_CHECK_BUS; bus++) {
		for (i = 0; i < 32; i++) {
			unsigned int devfn = PCI_DEVFN(i, 0);
			u32 val1, val2;

			pci_conf1_read(0, bus, devfn, 0, 4, &val1);
			if (val1 == 0xffffffff)
				continue;

			if (pci_mmcfg_arch_reachable(0, bus, devfn)) {
				raw_pci_ops->read(0, bus, devfn, 0, 4, &val2);
				if (val1 == val2)
					continue;
			}
			set_bit(i + 32 * bus, pci_mmcfg_fallback_slots);
			printk(KERN_NOTICE "PCI: No mmconfig possible on device"
			       " %02x:%02x\n", bus, i);
		}
	}
}
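
The per-architecture access path is expected to consult this bitmap on every config access and fall back to type 1 cycles for flagged slots. A minimal sketch of such a check, mirroring the bit layout used by set_bit() above — the function name is illustrative, and the real lookup lives in the (collapsed) per-arch mmconfig files:

	/* Sketch only: mmcfg_slot_flagged() is not the real API name. */
	static inline int mmcfg_slot_flagged(unsigned int seg, int bus,
					     unsigned int devfn)
	{
		return seg == 0 && bus < PCI_MMCFG_MAX_CHECK_BUS &&
		       test_bit(PCI_SLOT(devfn) + 32 * bus,
				pci_mmcfg_fallback_slots);
	}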
static const char __init *pci_mmcfg_e7520(void)
{
	u32 win;
	pci_conf1_read(0, 0, PCI_DEVFN(0,0), 0xce, 2, &win);

	pci_mmcfg_config_num = 1;
	pci_mmcfg_config = kzalloc(sizeof(pci_mmcfg_config[0]), GFP_KERNEL);
	if (!pci_mmcfg_config)
		return NULL;
	pci_mmcfg_config[0].address = (win & 0xf000) << 16;
	pci_mmcfg_config[0].pci_segment = 0;
	pci_mmcfg_config[0].start_bus_number = 0;
	pci_mmcfg_config[0].end_bus_number = 255;

	return "Intel Corporation E7520 Memory Controller Hub";
}

static const char __init *pci_mmcfg_intel_945(void)
{
	u32 pciexbar, mask = 0, len = 0;

	pci_mmcfg_config_num = 1;

	pci_conf1_read(0, 0, PCI_DEVFN(0,0), 0x48, 4, &pciexbar);

	/* Enable bit */
	if (!(pciexbar & 1))
		pci_mmcfg_config_num = 0;

	/* Size bits */
	switch ((pciexbar >> 1) & 3) {
	case 0:
		mask = 0xf0000000U;
		len  = 0x10000000U;
		break;
	case 1:
		mask = 0xf8000000U;
		len  = 0x08000000U;
		break;
	case 2:
		mask = 0xfc000000U;
		len  = 0x04000000U;
		break;
	default:
		pci_mmcfg_config_num = 0;
	}

	/* Errata #2, things break when not aligned on a 256Mb boundary */
	/* Can only happen in 64M/128M mode */
	if ((pciexbar & mask) & 0x0fffffffU)
		pci_mmcfg_config_num = 0;

	if (pci_mmcfg_config_num) {
		pci_mmcfg_config = kzalloc(sizeof(pci_mmcfg_config[0]), GFP_KERNEL);
		if (!pci_mmcfg_config)
			return NULL;
		pci_mmcfg_config[0].address = pciexbar & mask;
		pci_mmcfg_config[0].pci_segment = 0;
		pci_mmcfg_config[0].start_bus_number = 0;
		pci_mmcfg_config[0].end_bus_number = (len >> 20) - 1;
	}

	return "Intel Corporation 945G/GZ/P/PL Express Memory Controller Hub";
}
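
For concreteness, here is the 945 decode worked through on a hypothetical PCIEXBAR value (0xe0000001 is illustrative, not taken from hardware documentation):

	/*
	 * PCIEXBAR = 0xe0000001:
	 *   bit 0    = 1 -> MMCONFIG enabled
	 *   bits 2:1 = 0 -> 256MB window: mask 0xf0000000, len 0x10000000
	 *   base     = 0xe0000001 & 0xf0000000 = 0xe0000000
	 *              (256MB-aligned, so the Errata #2 check passes)
	 *   end bus  = (0x10000000 >> 20) - 1 = 255
	 */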
struct pci_mmcfg_hostbridge_probe {
	u32 vendor;
	u32 device;
	const char *(*probe)(void);
};

static struct pci_mmcfg_hostbridge_probe pci_mmcfg_probes[] __initdata = {
	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, pci_mmcfg_e7520 },
	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82945G_HB, pci_mmcfg_intel_945 },
};

static int __init pci_mmcfg_check_hostbridge(void)
{
	u32 l;
	u16 vendor, device;
	int i;
	const char *name;

	pci_conf1_read(0, 0, PCI_DEVFN(0,0), 0, 4, &l);
	vendor = l & 0xffff;
	device = (l >> 16) & 0xffff;

	pci_mmcfg_config_num = 0;
	pci_mmcfg_config = NULL;
	name = NULL;

	for (i = 0; !name && i < ARRAY_SIZE(pci_mmcfg_probes); i++) {
		if (pci_mmcfg_probes[i].vendor == vendor &&
		    pci_mmcfg_probes[i].device == device)
			name = pci_mmcfg_probes[i].probe();
	}

	if (name) {
		printk(KERN_INFO "PCI: Found %s %s MMCONFIG support.\n",
		       name, pci_mmcfg_config_num ? "with" : "without");
	}

	return name != NULL;
}

static void __init pci_mmcfg_insert_resources(void)
{
#define PCI_MMCFG_RESOURCE_NAME_LEN 19
	int i;
	struct resource *res;
	char *names;
	unsigned num_buses;

	res = kcalloc(PCI_MMCFG_RESOURCE_NAME_LEN + sizeof(*res),
		      pci_mmcfg_config_num, GFP_KERNEL);
	if (!res) {
		printk(KERN_ERR "PCI: Unable to allocate MMCONFIG resources\n");
		return;
	}

	names = (void *)&res[pci_mmcfg_config_num];
	for (i = 0; i < pci_mmcfg_config_num; i++, res++) {
		struct acpi_mcfg_allocation *cfg = &pci_mmcfg_config[i];

		num_buses = cfg->end_bus_number - cfg->start_bus_number + 1;
		res->name = names;
		snprintf(names, PCI_MMCFG_RESOURCE_NAME_LEN, "PCI MMCONFIG %u",
			 cfg->pci_segment);
		res->start = cfg->address;
		res->end = res->start + (num_buses << 20) - 1;
		res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
		insert_resource(&iomem_resource, res);
		names += PCI_MMCFG_RESOURCE_NAME_LEN;
	}
}

static void __init pci_mmcfg_reject_broken(int type)
{
	typeof(pci_mmcfg_config[0]) *cfg;

	if ((pci_mmcfg_config_num == 0) ||
	    (pci_mmcfg_config == NULL) ||
	    (pci_mmcfg_config[0].address == 0))
		return;

	cfg = &pci_mmcfg_config[0];

	/*
	 * Handle more broken MCFG tables on Asus etc.
	 * They only contain a single entry for bus 0-0.
	 */
	if (pci_mmcfg_config_num == 1 &&
	    cfg->pci_segment == 0 &&
	    (cfg->start_bus_number | cfg->end_bus_number) == 0) {
		printk(KERN_ERR "PCI: start and end of bus number is 0. "
		       "Rejected as broken MCFG.\n");
		goto reject;
	}

	/*
	 * Only do this check when type 1 works. If it doesn't work
	 * assume we run on a Mac and always use MCFG
	 */
	if (type == 1 && !e820_all_mapped(cfg->address,
					  cfg->address + MMCONFIG_APER_MIN,
					  E820_RESERVED)) {
		printk(KERN_ERR "PCI: BIOS Bug: MCFG area at %Lx is not"
		       " E820-reserved\n", cfg->address);
		goto reject;
	}

	return;

reject:
	printk(KERN_ERR "PCI: Not using MMCONFIG.\n");
	kfree(pci_mmcfg_config);
	pci_mmcfg_config = NULL;
	pci_mmcfg_config_num = 0;
}

void __init pci_mmcfg_init(int type)
{
	int known_bridge = 0;

	if ((pci_probe & PCI_PROBE_MMCONF) == 0)
		return;

	if (type == 1 && pci_mmcfg_check_hostbridge())
		known_bridge = 1;

	if (!known_bridge) {
		acpi_table_parse(ACPI_SIG_MCFG, acpi_parse_mcfg);
		pci_mmcfg_reject_broken(type);
	}

	if ((pci_mmcfg_config_num == 0) ||
	    (pci_mmcfg_config == NULL) ||
	    (pci_mmcfg_config[0].address == 0))
		return;

	if (pci_mmcfg_arch_init()) {
		if (type == 1)
			unreachable_devices();
		if (known_bridge)
			pci_mmcfg_insert_resources();
		pci_probe = (pci_probe & ~PCI_PROBE_MASK) | PCI_PROBE_MMCONF;
	}
}
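
Read top to bottom, pci_mmcfg_init() is a small decision procedure; a rough summary of the control flow implemented above:

	/*
	 * pci_mmcfg_init(type):
	 *   1. bail out unless PCI_PROBE_MMCONF probing is allowed;
	 *   2. if type 1 config access works, try the hostbridge quirk
	 *      table (E7520, 945) before trusting the BIOS tables;
	 *   3. otherwise parse ACPI MCFG and reject known-broken tables;
	 *   4. once the arch mapping succeeds, blacklist unreachable
	 *      bus-0 slots (type 1 only) and, for known bridges, publish
	 *      the aperture in the iomem resource tree.
	 */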
@@ -94,3 +94,13 @@ extern void pci_pcbios_init(void);
 extern void pci_mmcfg_init(int type);
 extern void pcibios_sort(void);
+
+/* pci-mmconfig.c */
+
+/* Verify the first 16 busses. We assume that systems with more busses
+   get MCFG right. */
+#define PCI_MMCFG_MAX_CHECK_BUS 16
+extern DECLARE_BITMAP(pci_mmcfg_fallback_slots, 32*PCI_MMCFG_MAX_CHECK_BUS);
+
+extern int __init pci_mmcfg_arch_reachable(unsigned int seg, unsigned int bus,
+					   unsigned int devfn);
+extern int __init pci_mmcfg_arch_init(void);
@@ -152,18 +152,18 @@ config MPSC
	  Optimize for Intel Pentium 4 and older Nocona/Dempsey Xeon CPUs
	  with Intel Extended Memory 64 Technology(EM64T). For details see
	  <http://www.intel.com/technology/64bitextensions/>.
-	  Note the the latest Xeons (Xeon 51xx and 53xx) are not based on the
-	  Netburst core and shouldn't use this option. You can distingush them
+	  Note that the latest Xeons (Xeon 51xx and 53xx) are not based on the
+	  Netburst core and shouldn't use this option. You can distinguish them
	  using the cpu family field
-	  in /proc/cpuinfo. Family 15 is a older Xeon, Family 6 a newer one
-	  (this rule only applies to system that support EM64T)
+	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one
+	  (this rule only applies to systems that support EM64T)

 config MCORE2
	bool "Intel Core2 / newer Xeon"
	help
	  Optimize for Intel Core2 and newer Xeons (51xx)
-	  You can distingush the newer Xeons from the older ones using
-	  the cpu family field in /proc/cpuinfo. 15 is a older Xeon
+	  You can distinguish the newer Xeons from the older ones using
+	  the cpu family field in /proc/cpuinfo. 15 is an older Xeon
	  (use CONFIG_MPSC then), 6 is a newer one. This rule only
	  applies to CPUs that support EM64T.
@@ -458,8 +458,8 @@ config IOMMU
	  on systems with more than 3GB. This is usually needed for USB,
	  sound, many IDE/SATA chipsets and some other devices.
	  Provides a driver for the AMD Athlon64/Opteron/Turion/Sempron GART
-	  based IOMMU and a software bounce buffer based IOMMU used on Intel
-	  systems and as fallback.
+	  based hardware IOMMU and a software bounce buffer based IOMMU used
+	  on Intel systems and as fallback.
	  The code is only active when needed (enough memory and limited
	  device) unless CONFIG_IOMMU_DEBUG or iommu=force is specified
	  too.
@@ -496,6 +496,12 @@ config CALGARY_IOMMU_ENABLED_BY_DEFAULT
 # need this always selected by IOMMU for the VIA workaround
 config SWIOTLB
	bool
+	help
+	  Support for software bounce buffers used on x86-64 systems
+	  which don't have a hardware IOMMU (e.g. the current generation
+	  of Intel's x86-64 CPUs). Using this PCI devices which can only
+	  access 32-bits of memory can be used on systems with more than
+	  3 GB of memory. If unsure, say Y.

 config X86_MCE
	bool "Machine check support" if EMBEDDED
...
@@ -21,6 +21,7 @@
 #include <linux/stddef.h>
 #include <linux/personality.h>
 #include <linux/compat.h>
+#include <linux/binfmts.h>
 #include <asm/ucontext.h>
 #include <asm/uaccess.h>
 #include <asm/i387.h>
@@ -449,7 +450,11 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,

	/* Return stub is in 32bit vsyscall page */
	{
-		void __user *restorer = VSYSCALL32_SIGRETURN;
+		void __user *restorer;
+		if (current->binfmt->hasvdso)
+			restorer = VSYSCALL32_SIGRETURN;
+		else
+			restorer = (void *)&frame->retcode;
		if (ka->sa.sa_flags & SA_RESTORER)
			restorer = ka->sa.sa_restorer;
		err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
@@ -495,7 +500,7 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
	ptrace_notify(SIGTRAP);

 #if DEBUG_SIG
-	printk("SIG deliver (%s:%d): sp=%p pc=%p ra=%p\n",
+	printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
		current->comm, current->pid, frame, regs->rip, frame->pretcode);
 #endif
@@ -601,7 +606,7 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
	ptrace_notify(SIGTRAP);

 #if DEBUG_SIG
-	printk("SIG deliver (%s:%d): sp=%p pc=%p ra=%p\n",
+	printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
		current->comm, current->pid, frame, regs->rip, frame->pretcode);
 #endif
...