Commit 414f827c authored by Linus Torvalds

Merge branch 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6

* 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6: (94 commits)
  [PATCH] x86-64: Remove mk_pte_phys()
  [PATCH] i386: Fix broken CONFIG_COMPAT_VDSO on i386
  [PATCH] i386: fix 32-bit ioctls on x64_32
  [PATCH] x86: Unify pcspeaker platform device code between i386/x86-64
  [PATCH] i386: Remove extern declaration from mm/discontig.c, put in header.
  [PATCH] i386: Rename cpu_gdt_descr and remove extern declaration from smpboot.c
  [PATCH] i386: Move mce_disabled to asm/mce.h
  [PATCH] i386: paravirt unhandled fallthrough
  [PATCH] x86_64: Wire up compat epoll_pwait
  [PATCH] x86: Don't require the vDSO for handling a.out signals
  [PATCH] i386: Fix Cyrix MediaGX detection
  [PATCH] i386: Fix warning in cpu initialization
  [PATCH] i386: Fix warning in microcode.c
  [PATCH] x86: Enable NMI watchdog for AMD Family 0x10 CPUs
  [PATCH] x86: Add new CPUID bits for AMD Family 10 CPUs in /proc/cpuinfo
  [PATCH] i386: Remove fastcall in paravirt.[ch]
  [PATCH] x86-64: Fix wrong gcc check in bitops.h
  [PATCH] x86-64: survive having no irq mapping for a vector
  [PATCH] i386: geode configuration fixes
  [PATCH] i386: add option to show more code in oops reports
  ...
parents 86a71dbd 126b1922
......@@ -104,6 +104,9 @@ loader, and have no meaning to the kernel directly.
Do not modify the syntax of boot loader parameters without extreme
need or coordination with <Documentation/i386/boot.txt>.
There are also arch-specific kernel-parameters not documented here.
See for example <Documentation/x86_64/boot-options.txt>.
Note that ALL kernel parameters listed below are CASE SENSITIVE, and that
a trailing = on the name of any parameter states that that parameter will
be entered as an environment variable, whereas its absence indicates that
......@@ -361,6 +364,11 @@ and is between 256 and 4096 characters. It is defined in the file
clocksource is not available, it defaults to PIT.
Format: { pit | tsc | cyclone | pmtmr }
code_bytes [IA32] How many bytes of object code to print in an
oops report.
Range: 0 - 8192
Default: 64
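			For example (an illustrative value, not part of this
			patch): code_bytes=128 would print 128 bytes of
			object code around the faulting instruction instead
			of the default 64.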
disable_8254_timer
enable_8254_timer
[IA32/X86_64] Disable/Enable interrupt 0 timer routing
......
......@@ -180,40 +180,81 @@ PCI
pci=lastbus=NUMBER Scan up to NUMBER busses, no matter what the mptable says.
pci=noacpi Don't use ACPI to set up PCI interrupt routing.
IOMMU
iommu=[size][,noagp][,off][,force][,noforce][,leak][,memaper[=order]][,merge]
[,forcesac][,fullflush][,nomerge][,noaperture][,calgary]
size set size of iommu (in bytes)
noagp don't initialize the AGP driver and use full aperture.
off don't use the IOMMU
leak turn on simple iommu leak tracing (only when CONFIG_IOMMU_LEAK is on)
memaper[=order] allocate an own aperture over RAM with size 32MB^order.
noforce don't force IOMMU usage. Default.
force Force IOMMU.
merge Do SG merging. Implies force (experimental)
nomerge Don't do SG merging.
forcesac For SAC mode for masks <40bits (experimental)
fullflush Flush IOMMU on each allocation (default)
nofullflush Don't use IOMMU fullflush
allowed overwrite iommu off workarounds for specific chipsets.
soft Use software bounce buffering (default for Intel machines)
noaperture Don't touch the aperture for AGP.
allowdac Allow DMA >4GB
When off all DMA over >4GB is forced through an IOMMU or bounce
buffering.
nodac Forbid DMA >4GB
panic Always panic when IOMMU overflows
IOMMU (input/output memory management unit)
Currently four x86-64 PCI-DMA mapping implementations exist:
1. <arch/x86_64/kernel/pci-nommu.c>: use no hardware/software IOMMU at all
(e.g. because you have < 3 GB memory).
Kernel boot message: "PCI-DMA: Disabling IOMMU"
2. <arch/x86_64/kernel/pci-gart.c>: AMD GART based hardware IOMMU.
Kernel boot message: "PCI-DMA: using GART IOMMU"
3. <arch/x86_64/kernel/pci-swiotlb.c> : Software IOMMU implementation. Used
e.g. if there is no hardware IOMMU in the system and it is needed because
you have >3GB memory or told the kernel to use it (iommu=soft).
Kernel boot message: "PCI-DMA: Using software bounce buffering
for IO (SWIOTLB)"
4. <arch/x86_64/pci-calgary.c> : IBM Calgary hardware IOMMU. Used in IBM
pSeries and xSeries servers. This hardware IOMMU supports DMA address
mapping with memory protection, etc.
Kernel boot message: "PCI-DMA: Using Calgary IOMMU"
iommu=[<size>][,noagp][,off][,force][,noforce][,leak[=<nr_of_leak_pages>]
[,memaper[=<order>]][,merge][,forcesac][,fullflush][,nomerge]
[,noaperture][,calgary]
General iommu options:
off Don't initialize and use any kind of IOMMU.
noforce Don't force hardware IOMMU usage when it is not needed.
(default).
force Force the use of the hardware IOMMU even when it is
not actually needed (e.g. because < 3 GB memory).
soft Use software bounce buffering (SWIOTLB) (default for
Intel machines). This can be used to prevent the usage
of an available hardware IOMMU.
iommu options only relevant to the AMD GART hardware IOMMU:
<size> Set the size of the remapping area in bytes.
allowed Override iommu off workarounds for specific chipsets.
fullflush Flush IOMMU on each allocation (default).
nofullflush Don't use IOMMU fullflush.
leak Turn on simple iommu leak tracing (only when
CONFIG_IOMMU_LEAK is on). Default number of leak pages
is 20.
memaper[=<order>] Allocate its own aperture over RAM with size 32MB<<order.
(default: order=1, i.e. 64MB)
merge Do scatter-gather (SG) merging. Implies "force"
(experimental).
nomerge Don't do scatter-gather (SG) merging.
noaperture Ask the IOMMU not to touch the aperture for AGP.
forcesac Force single-address cycle (SAC) mode for masks <40bits
(experimental).
noagp Don't initialize the AGP driver and use full aperture.
allowdac Allow double-address cycle (DAC) mode, i.e. DMA >4GB.
DAC is used with 32-bit PCI to push a 64-bit address in
two cycles. When off, all DMA >4GB is forced through
an IOMMU or software bounce buffering.
nodac Forbid DAC mode, i.e. DMA >4GB.
panic Always panic when IOMMU overflows.
calgary Use the Calgary IOMMU if it is available
swiotlb=pages[,force]
pages Prereserve that many 128K pages for the software IO bounce buffering.
iommu options only relevant to the software bounce buffering (SWIOTLB) IOMMU
implementation:
swiotlb=<pages>[,force]
<pages> Prereserve that many 128K pages for the software IO
bounce buffering.
force Force all IO through the software TLB.
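As an illustrative (hypothetical) combination of the options above, a
machine with an AMD GART could boot with

	iommu=force,fullflush,memaper=2

to force the hardware IOMMU, flush it on every allocation and allocate a
32MB<<2 = 128MB aperture, while

	iommu=soft swiotlb=16384

would force software bounce buffering with 16384 preallocated 128K pages.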
Settings for the IBM Calgary hardware IOMMU currently found in IBM
pSeries and xSeries machines:
calgary=[64k,128k,256k,512k,1M,2M,4M,8M]
calgary=[translate_empty_slots]
calgary=[disable=<PCI bus number>]
panic Always panic when IOMMU overflows
64k,...,8M - Set the size of each PCI slot's translation table
when using the Calgary IOMMU. This is the size of the translation
......@@ -239,7 +280,7 @@ Debugging
This will also cause panics on machine check exceptions.
Useful together with panic=30 to trigger a reboot.
kstack=N Print that many words from the kernel stack in oops dumps.
kstack=N Print N words from the kernel stack in oops dumps.
pagefaulttrace Dump all page faults. Only useful for extreme debugging
and will create a lot of output.
......@@ -251,15 +292,8 @@ Debugging
newfallback: use new unwinder but fall back to old if it gets
stuck (default)
call_trace=[old|both|newfallback|new]
old: use old inexact backtracer
new: use new exact dwarf2 unwinder
both: print entries from both
newfallback: use new unwinder but fall back to old if it gets
stuck (default)
Misc
Miscellaneous
noreplacement Don't replace instructions with more appropriate ones
for the CPU. This may be useful on asymmetric MP systems
where some CPU have less capabilities than the others.
where some CPUs have fewer capabilities than others.
......@@ -2,7 +2,7 @@ Firmware support for CPU hotplug under Linux/x86-64
---------------------------------------------------
Linux/x86-64 supports CPU hotplug now. For various reasons Linux wants to
know in advance boot time the maximum number of CPUs that could be plugged
know in advance of boot time the maximum number of CPUs that could be plugged
into the system. ACPI 3.0 currently has no official way to supply
this information from the firmware to the operating system.
......
......@@ -9,9 +9,9 @@ zombie. While the thread is in user space the kernel stack is empty
except for the thread_info structure at the bottom.
In addition to the per thread stacks, there are specialized stacks
associated with each cpu. These stacks are only used while the kernel
is in control on that cpu, when a cpu returns to user space the
specialized stacks contain no useful data. The main cpu stacks is
associated with each CPU. These stacks are only used while the kernel
is in control on that CPU; when a CPU returns to user space the
specialized stacks contain no useful data. The main CPU stacks are:
* Interrupt stack. IRQSTACKSIZE
......@@ -32,17 +32,17 @@ x86_64 also has a feature which is not available on i386, the ability
to automatically switch to a new stack for designated events such as
double fault or NMI, which makes it easier to handle these unusual
events on x86_64. This feature is called the Interrupt Stack Table
(IST). There can be up to 7 IST entries per cpu. The IST code is an
index into the Task State Segment (TSS), the IST entries in the TSS
point to dedicated stacks, each stack can be a different size.
(IST). There can be up to 7 IST entries per CPU. The IST code is an
index into the Task State Segment (TSS). The IST entries in the TSS
point to dedicated stacks; each stack can be a different size.
An IST is selected by an non-zero value in the IST field of an
An IST is selected by a non-zero value in the IST field of an
interrupt-gate descriptor. When an interrupt occurs and the hardware
loads such a descriptor, the hardware automatically sets the new stack
pointer based on the IST value, then invokes the interrupt handler. If
software wants to allow nested IST interrupts then the handler must
adjust the IST values on entry to and exit from the interrupt handler.
(this is occasionally done, e.g. for debug exceptions)
(This is occasionally done, e.g. for debug exceptions.)
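As a sketch of how a vector is attached to one of these stacks (the helper
and constant names below are taken from the x86-64 kernel code of this era;
treat the exact call as illustrative):

	/* Route vector 8 (#DF) onto its dedicated IST stack so the
	   handler runs on a known-good stack even when the kernel
	   stack pointer is corrupt. */
	set_intr_gate_ist(8, &double_fault, DOUBLEFAULT_STACK);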
Events with different IST codes (i.e. with different stacks) can be
nested. For example, a debug interrupt can safely be interrupted by an
......@@ -58,17 +58,17 @@ The currently assigned IST stacks are :-
Used for interrupt 12 - Stack Fault Exception (#SS).
This allows to recover from invalid stack segments. Rarely
This allows the CPU to recover from invalid stack segments. Rarely
happens.
* DOUBLEFAULT_STACK. EXCEPTION_STKSZ (PAGE_SIZE).
Used for interrupt 8 - Double Fault Exception (#DF).
Invoked when handling a exception causes another exception. Happens
when the kernel is very confused (e.g. kernel stack pointer corrupt)
Using a separate stack allows to recover from it well enough in many
cases to still output an oops.
Invoked when handling one exception causes another exception. Happens
when the kernel is very confused (e.g. kernel stack pointer corrupt).
Using a separate stack allows the kernel to recover from it well enough
in many cases to still output an oops.
* NMI_STACK. EXCEPTION_STKSZ (PAGE_SIZE).
......
Configurable sysfs parameters for the x86-64 machine check code.
Machine checks report internal hardware error conditions detected
by the CPU. Uncorrected errors typically cause a machine check
(often with panic), corrected ones cause a machine check log entry.
Machine checks are organized in banks (normally associated with
a hardware subsystem) and subevents in a bank. The exact meaning
of the banks and subevents is CPU-specific.
mcelog knows how to decode them.
When you see the "Machine check errors logged" message in the system
log then mcelog should be run to collect and decode machine check entries
from /dev/mcelog. Normally mcelog should be run regularly from a cronjob.
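An illustrative crontab entry (the exact mcelog invocation is an
assumption; see the mcelog documentation):

	*/5 * * * * /usr/sbin/mcelog >> /var/log/mcelog

which drains /dev/mcelog every five minutes, matching the default
check_interval below.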
Each CPU has a directory in /sys/devices/system/machinecheck/machinecheckN
(N = CPU number)
The directory contains these configurable entries:
bankNctl
(N bank number)
64-bit hex bitmask enabling/disabling specific subevents for bank N.
When a bit in the bitmask is zero, the respective
subevent will not be reported.
By default all events are enabled.
Note that the BIOS maintains another mask to disable specific events
per bank. This is not visible here.
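For example (hypothetical bank and subevent numbers), disabling subevent 0
of bank 4 on CPU 0:

	echo 0xfffffffffffffffe > /sys/devices/system/machinecheck/machinecheck0/bank4ctl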
The following entries appear for each CPU, but they are truly shared
between all CPUs.
check_interval
How often to poll for corrected machine check errors, in seconds
(Note: output is hexadecimal). Default: 5 minutes.
tolerant
Tolerance level. When a machine check exception occurs for an
uncorrected machine check, the kernel can take different actions.
Since machine check exceptions can happen any time it is sometimes
risky for the kernel to kill a process because it defies
normal kernel locking rules. The tolerance level configures
how hard the kernel tries to recover even at some risk of deadlock.
0: always panic,
1: panic if deadlock possible,
2: try to avoid panic,
3: never panic or exit (for testing only)
Default: 1
Note this only makes a difference if the CPU allows recovery
from a machine check exception. Current x86 CPUs generally do not.
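For example, to make a test machine try harder to avoid panics (at some
deadlock risk):

	echo 2 > /sys/devices/system/machinecheck/machinecheck0/tolerant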
trigger
Program to run when a machine check event is detected.
This is an alternative to running mcelog regularly from cron
and allows events to be detected faster.
TBD document entries for AMD threshold interrupt configuration
For more details about the x86 machine check architecture
see the Intel and AMD architecture manuals from their developer websites.
For more details about the architecture
see http://one.firstfloor.org/~andi/mce.pdf
......@@ -3,26 +3,26 @@
Virtual memory map with 4 level page tables:
0000000000000000 - 00007fffffffffff (=47bits) user space, different per mm
0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
hole caused by [48:63] sign extension
ffff800000000000 - ffff80ffffffffff (=40bits) guard hole
ffff810000000000 - ffffc0ffffffffff (=46bits) direct mapping of all phys. memory
ffffc10000000000 - ffffc1ffffffffff (=40bits) hole
ffffc20000000000 - ffffe1ffffffffff (=45bits) vmalloc/ioremap space
ffff800000000000 - ffff80ffffffffff (=40 bits) guard hole
ffff810000000000 - ffffc0ffffffffff (=46 bits) direct mapping of all phys. memory
ffffc10000000000 - ffffc1ffffffffff (=40 bits) hole
ffffc20000000000 - ffffe1ffffffffff (=45 bits) vmalloc/ioremap space
... unused hole ...
ffffffff80000000 - ffffffff82800000 (=40MB) kernel text mapping, from phys 0
ffffffff80000000 - ffffffff82800000 (=40 MB) kernel text mapping, from phys 0
... unused hole ...
ffffffff88000000 - fffffffffff00000 (=1919MB) module mapping space
ffffffff88000000 - fffffffffff00000 (=1919 MB) module mapping space
The direct mapping covers all memory in the system upto the highest
The direct mapping covers all memory in the system up to the highest
memory address (this means in some cases it can also include PCI memory
holes)
holes).
vmalloc space is lazily synchronized into the different PML4 pages of
the processes using the page fault handler, with init_level4_pgt as
reference.
Current X86-64 implementations only support 40 bit of address space,
but we support upto 46bits. This expands into MBZ space in the page tables.
Current X86-64 implementations only support 40 bits of address space,
but we support up to 46 bits. This expands into MBZ space in the page tables.
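As a quick check of the size annotations: the direct mapping spans
0xffffc0ffffffffff - 0xffff810000000000 + 1 = 0x400000000000 = 2^46 bytes,
i.e. 46 bits, and the kernel text mapping spans 0x2800000 bytes = 40 MB.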
-Andi Kleen, Jul 2004
......@@ -3779,6 +3779,7 @@ P: Andi Kleen
M: ak@suse.de
L: discuss@x86-64.org
W: http://www.x86-64.org
T: quilt ftp://ftp.firstfloor.org/pub/ak/x86_64/quilt-current
S: Maintained
YAM DRIVER FOR AX.25
......
......@@ -203,6 +203,15 @@ config PARAVIRT
However, when run without a hypervisor the kernel is
theoretically slower. If in doubt, say N.
config VMI
bool "VMI Paravirt-ops support"
depends on PARAVIRT
default y
help
VMI provides a paravirtualized interface to multiple hypervisors
including VMware ESX server and Xen by connecting to a ROM module
provided by the hypervisor.
config ACPI_SRAT
bool
default y
......@@ -1263,3 +1272,12 @@ config X86_TRAMPOLINE
config KTIME_SCALAR
bool
default y
config NO_IDLE_HZ
bool
depends on PARAVIRT
default y
help
Switches the regular HZ timer off when the system is going idle.
This helps a hypervisor detect that the Linux system is idle,
reducing the overhead of idle systems.
......@@ -226,11 +226,6 @@ config X86_CMPXCHG
depends on !M386
default y
config X86_XADD
bool
depends on !M386
default y
config X86_L1_CACHE_SHIFT
int
default "7" if MPENTIUM4 || X86_GENERIC
......
......@@ -87,7 +87,7 @@ config DOUBLEFAULT
config DEBUG_PARAVIRT
bool "Enable some paravirtualization debugging"
default y
default n
depends on PARAVIRT && DEBUG_KERNEL
help
Currently deliberately clobbers regs which are allowed to be
......
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.20-rc3
# Fri Jan 5 11:54:46 2007
# Linux kernel version: 2.6.20-git8
# Tue Feb 13 11:25:18 2007
#
CONFIG_X86_32=y
CONFIG_GENERIC_TIME=y
......@@ -10,6 +10,7 @@ CONFIG_STACKTRACE_SUPPORT=y
CONFIG_SEMAPHORE_SLEEPERS=y
CONFIG_X86=y
CONFIG_MMU=y
CONFIG_ZONE_DMA=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_BUG=y
......@@ -139,7 +140,6 @@ CONFIG_MPENTIUMIII=y
# CONFIG_MVIAC3_2 is not set
CONFIG_X86_GENERIC=y
CONFIG_X86_CMPXCHG=y
CONFIG_X86_XADD=y
CONFIG_X86_L1_CACHE_SHIFT=7
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
# CONFIG_ARCH_HAS_ILOG2_U32 is not set
......@@ -198,6 +198,7 @@ CONFIG_FLAT_NODE_MEM_MAP=y
# CONFIG_SPARSEMEM_STATIC is not set
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_RESOURCES_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
# CONFIG_HIGHPTE is not set
# CONFIG_MATH_EMULATION is not set
CONFIG_MTRR=y
......@@ -211,6 +212,7 @@ CONFIG_HZ_250=y
CONFIG_HZ=250
# CONFIG_KEXEC is not set
# CONFIG_CRASH_DUMP is not set
CONFIG_PHYSICAL_START=0x100000
# CONFIG_RELOCATABLE is not set
CONFIG_PHYSICAL_ALIGN=0x100000
# CONFIG_HOTPLUG_CPU is not set
......@@ -229,13 +231,14 @@ CONFIG_PM_SYSFS_DEPRECATED=y
# ACPI (Advanced Configuration and Power Interface) Support
#
CONFIG_ACPI=y
CONFIG_ACPI_PROCFS=y
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
# CONFIG_ACPI_VIDEO is not set
# CONFIG_ACPI_HOTKEY is not set
CONFIG_ACPI_FAN=y
# CONFIG_ACPI_DOCK is not set
# CONFIG_ACPI_BAY is not set
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_THERMAL=y
# CONFIG_ACPI_ASUS is not set
......@@ -306,7 +309,6 @@ CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
# CONFIG_PCIEPORTBUS is not set
CONFIG_PCI_MSI=y
# CONFIG_PCI_MULTITHREAD_PROBE is not set
# CONFIG_PCI_DEBUG is not set
# CONFIG_HT_IRQ is not set
CONFIG_ISA_DMA_API=y
......@@ -347,6 +349,7 @@ CONFIG_UNIX=y
CONFIG_XFRM=y
# CONFIG_XFRM_USER is not set
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
......@@ -446,6 +449,7 @@ CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_SYS_HYPERVISOR is not set
#
......@@ -466,8 +470,7 @@ CONFIG_FW_LOADER=y
#
# Plug and Play support
#
CONFIG_PNP=y
CONFIG_PNPACPI=y
# CONFIG_PNP is not set
#
# Block devices
......@@ -515,6 +518,7 @@ CONFIG_BLK_DEV_IDECD=y
# CONFIG_BLK_DEV_IDETAPE is not set
# CONFIG_BLK_DEV_IDEFLOPPY is not set
# CONFIG_BLK_DEV_IDESCSI is not set
CONFIG_BLK_DEV_IDEACPI=y
# CONFIG_IDE_TASK_IOCTL is not set
#
......@@ -547,6 +551,7 @@ CONFIG_BLK_DEV_AMD74XX=y
# CONFIG_BLK_DEV_JMICRON is not set
# CONFIG_BLK_DEV_SC1200 is not set
CONFIG_BLK_DEV_PIIX=y
# CONFIG_BLK_DEV_IT8213 is not set
# CONFIG_BLK_DEV_IT821X is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_PDC202XX_OLD is not set
......@@ -557,6 +562,7 @@ CONFIG_BLK_DEV_PIIX=y
# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
# CONFIG_BLK_DEV_TC86C001 is not set
# CONFIG_IDE_ARM is not set
CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_IDEDMA_IVB is not set
......@@ -655,6 +661,7 @@ CONFIG_AIC79XX_DEBUG_MASK=0
# Serial ATA (prod) and Parallel ATA (experimental) drivers
#
CONFIG_ATA=y
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_SATA_AHCI=y
CONFIG_SATA_SVW=y
CONFIG_ATA_PIIX=y
......@@ -670,6 +677,7 @@ CONFIG_SATA_SIL=y
# CONFIG_SATA_ULI is not set
CONFIG_SATA_VIA=y
# CONFIG_SATA_VITESSE is not set
# CONFIG_SATA_INIC162X is not set
CONFIG_SATA_INTEL_COMBINED=y
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
......@@ -687,6 +695,7 @@ CONFIG_SATA_INTEL_COMBINED=y
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_MARVELL is not set
......@@ -739,9 +748,7 @@ CONFIG_IEEE1394=y
# Subsystem Options
#
# CONFIG_IEEE1394_VERBOSEDEBUG is not set
# CONFIG_IEEE1394_OUI_DB is not set
# CONFIG_IEEE1394_EXTRA_CONFIG_ROMS is not set
# CONFIG_IEEE1394_EXPORT_FULL_API is not set
#
# Device Drivers
......@@ -766,6 +773,11 @@ CONFIG_IEEE1394_RAWIO=y
#
# CONFIG_I2O is not set
#
# Macintosh device drivers
#
# CONFIG_MAC_EMUMOUSEBTN is not set
#
# Network device support
#
......@@ -833,6 +845,7 @@ CONFIG_8139TOO=y
# CONFIG_SUNDANCE is not set
# CONFIG_TLAN is not set
# CONFIG_VIA_RHINE is not set
# CONFIG_SC92031 is not set
#
# Ethernet (1000 Mbit)
......@@ -855,11 +868,13 @@ CONFIG_SKY2=y
CONFIG_TIGON3=y
CONFIG_BNX2=y
# CONFIG_QLA3XXX is not set
# CONFIG_ATL1 is not set
#
# Ethernet (10000 Mbit)
#
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_IXGB is not set
# CONFIG_S2IO is not set
# CONFIG_MYRI10GE is not set
......@@ -1090,6 +1105,7 @@ CONFIG_SOUND=y
# Open Sound System
#
CONFIG_SOUND_PRIME=y
CONFIG_OBSOLETE_OSS=y
# CONFIG_SOUND_BT878 is not set
# CONFIG_SOUND_ES1371 is not set
CONFIG_SOUND_ICH=y
......@@ -1103,6 +1119,7 @@ CONFIG_SOUND_ICH=y
# HID Devices
#
CONFIG_HID=y
# CONFIG_HID_DEBUG is not set
#
# USB support
......@@ -1117,10 +1134,8 @@ CONFIG_USB=y
# Miscellaneous USB options
#
CONFIG_USB_DEVICEFS=y
# CONFIG_USB_BANDWIDTH is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_SUSPEND is not set
# CONFIG_USB_MULTITHREAD_PROBE is not set
# CONFIG_USB_OTG is not set
#
......@@ -1130,9 +1145,11 @@ CONFIG_USB_EHCI_HCD=y
# CONFIG_USB_EHCI_SPLIT_ISO is not set
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
# CONFIG_USB_EHCI_BIG_ENDIAN_MMIO is not set
# CONFIG_USB_ISP116X_HCD is not set
CONFIG_USB_OHCI_HCD=y
# CONFIG_USB_OHCI_BIG_ENDIAN is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
......@@ -1183,6 +1200,7 @@ CONFIG_USB_HID=y
# CONFIG_USB_ATI_REMOTE2 is not set
# CONFIG_USB_KEYSPAN_REMOTE is not set
# CONFIG_USB_APPLETOUCH is not set
# CONFIG_USB_GTCO is not set
#
# USB Imaging devices
......@@ -1287,6 +1305,10 @@ CONFIG_USB_MON=y
# DMA Devices
#
#
# Auxiliary Display support
#
#
# Virtualization
#
......@@ -1480,6 +1502,7 @@ CONFIG_UNUSED_SYMBOLS=y
# CONFIG_DEBUG_FS is not set
# CONFIG_HEADERS_CHECK is not set
CONFIG_DEBUG_KERNEL=y
# CONFIG_DEBUG_SHIRQ is not set
CONFIG_LOG_BUF_SHIFT=18
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
......@@ -1488,7 +1511,6 @@ CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_RWSEMS is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_PROVE_LOCKING is not set
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
......@@ -1533,7 +1555,8 @@ CONFIG_CRC32=y
# CONFIG_LIBCRC32C is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_PLIST=y
CONFIG_IOMAP_COPY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_PENDING_IRQ=y
......
......@@ -40,8 +40,9 @@ obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
obj-$(CONFIG_HPET_TIMER) += hpet.o
obj-$(CONFIG_K8_NB) += k8.o
# Make sure this is linked after any other paravirt_ops structs: see head.S
obj-$(CONFIG_VMI) += vmi.o vmitime.o
obj-$(CONFIG_PARAVIRT) += paravirt.o
obj-y += pcspeaker.o
EXTRA_AFLAGS := -traditional
......
......@@ -36,6 +36,7 @@
#include <asm/hpet.h>
#include <asm/i8253.h>
#include <asm/nmi.h>
#include <asm/idle.h>
#include <mach_apic.h>
#include <mach_apicdef.h>
......@@ -1255,6 +1256,7 @@ fastcall void smp_apic_timer_interrupt(struct pt_regs *regs)
* Besides, if we don't timer interrupts ignore the global
* interrupt lock, which is the WrongThing (tm) to do.
*/
exit_idle();
irq_enter();
smp_local_timer_interrupt();
irq_exit();
......@@ -1305,6 +1307,7 @@ fastcall void smp_spurious_interrupt(struct pt_regs *regs)
{
unsigned long v;
exit_idle();
irq_enter();
/*
* Check if this really is a spurious interrupt and ACK it
......@@ -1329,6 +1332,7 @@ fastcall void smp_error_interrupt(struct pt_regs *regs)
{
unsigned long v, v1;
exit_idle();
irq_enter();
/* First tickle the hardware, only then report what went on. -- REW */
v = apic_read(APIC_ESR);
......@@ -1395,7 +1399,7 @@ int __init APIC_init_uniprocessor (void)
if (!skip_ioapic_setup && nr_ioapics)
setup_IO_APIC();
#endif
setup_boot_APIC_clock();
setup_boot_clock();
return 0;
}
......
......@@ -211,6 +211,7 @@
#include <linux/slab.h>
#include <linux/stat.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/miscdevice.h>
#include <linux/apm_bios.h>
#include <linux/init.h>
......@@ -1636,9 +1637,8 @@ static int do_open(struct inode * inode, struct file * filp)
return 0;
}
static int apm_get_info(char *buf, char **start, off_t fpos, int length)
static int proc_apm_show(struct seq_file *m, void *v)
{
char * p;
unsigned short bx;
unsigned short cx;
unsigned short dx;
......@@ -1650,8 +1650,6 @@ static int apm_get_info(char *buf, char **start, off_t fpos, int length)
int time_units = -1;
char *units = "?";
p = buf;
if ((num_online_cpus() == 1) &&
!(error = apm_get_power_status(&bx, &cx, &dx))) {
ac_line_status = (bx >> 8) & 0xff;
......@@ -1705,7 +1703,7 @@ static int apm_get_info(char *buf, char **start, off_t fpos, int length)
-1: Unknown
8) min = minutes; sec = seconds */
p += sprintf(p, "%s %d.%d 0x%02x 0x%02x 0x%02x 0x%02x %d%% %d %s\n",
seq_printf(m, "%s %d.%d 0x%02x 0x%02x 0x%02x 0x%02x %d%% %d %s\n",
driver_version,
(apm_info.bios.version >> 8) & 0xff,
apm_info.bios.version & 0xff,
......@@ -1716,10 +1714,22 @@ static int apm_get_info(char *buf, char **start, off_t fpos, int length)
percentage,
time_units,
units);
return 0;
}
return p - buf;
static int proc_apm_open(struct inode *inode, struct file *file)
{
return single_open(file, proc_apm_show, NULL);
}
static const struct file_operations apm_file_ops = {
.owner = THIS_MODULE,
.open = proc_apm_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int apm(void *unused)
{
unsigned short bx;
......@@ -2341,9 +2351,9 @@ static int __init apm_init(void)
set_base(gdt[APM_DS >> 3],
__va((unsigned long)apm_info.bios.dseg << 4));
apm_proc = create_proc_info_entry("apm", 0, NULL, apm_get_info);
apm_proc = create_proc_entry("apm", 0, NULL);
if (apm_proc)
apm_proc->owner = THIS_MODULE;
apm_proc->proc_fops = &apm_file_ops;
kapmd_task = kthread_create(apm, NULL, "kapmd");
if (IS_ERR(kapmd_task)) {
......
......@@ -72,7 +72,7 @@ void foo(void)
OFFSET(PT_EAX, pt_regs, eax);
OFFSET(PT_DS, pt_regs, xds);
OFFSET(PT_ES, pt_regs, xes);
OFFSET(PT_GS, pt_regs, xgs);
OFFSET(PT_FS, pt_regs, xfs);
OFFSET(PT_ORIG_EAX, pt_regs, orig_eax);
OFFSET(PT_EIP, pt_regs, eip);
OFFSET(PT_CS, pt_regs, xcs);
......
......@@ -605,7 +605,7 @@ void __init early_cpu_init(void)
struct pt_regs * __devinit idle_regs(struct pt_regs *regs)
{
memset(regs, 0, sizeof(struct pt_regs));
regs->xgs = __KERNEL_PDA;
regs->xfs = __KERNEL_PDA;
return regs;
}
......@@ -662,12 +662,12 @@ struct i386_pda boot_pda = {
.pcurrent = &init_task,
};
static inline void set_kernel_gs(void)
static inline void set_kernel_fs(void)
{
/* Set %gs for this CPU's PDA. Memory clobber is to create a
/* Set %fs for this CPU's PDA. Memory clobber is to create a
barrier with respect to any PDA operations, so the compiler
doesn't move any before here. */
asm volatile ("mov %0, %%gs" : : "r" (__KERNEL_PDA) : "memory");
asm volatile ("mov %0, %%fs" : : "r" (__KERNEL_PDA) : "memory");
}
/* Initialize the CPU's GDT and PDA. The boot CPU does this for
......@@ -718,7 +718,7 @@ void __cpuinit cpu_set_gdt(int cpu)
the boot CPU, this will transition from the boot gdt+pda to
the real ones). */
load_gdt(cpu_gdt_descr);
set_kernel_gs();
set_kernel_fs();
}
/* Common CPU init for both boot and secondary CPUs */
......@@ -764,8 +764,8 @@ static void __cpuinit _cpu_init(int cpu, struct task_struct *curr)
__set_tss_desc(cpu, GDT_ENTRY_DOUBLEFAULT_TSS, &doublefault_tss);
#endif
/* Clear %fs. */
asm volatile ("mov %0, %%fs" : : "r" (0));
/* Clear %gs. */
asm volatile ("mov %0, %%gs" : : "r" (0));
/* Clear all 6 debug registers: */
set_debugreg(0, 0);
......
......@@ -6,6 +6,7 @@
#include <asm/io.h>
#include <asm/processor.h>
#include <asm/timer.h>
#include <asm/pci-direct.h>
#include "cpu.h"
......@@ -161,19 +162,19 @@ static void __cpuinit set_cx86_inc(void)
static void __cpuinit geode_configure(void)
{
unsigned long flags;
u8 ccr3, ccr4;
u8 ccr3;
local_irq_save(flags);
/* Suspend on halt power saving and enable #SUSP pin */
setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x88);
ccr3 = getCx86(CX86_CCR3);
setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* Enable */
setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* enable MAPEN */
ccr4 = getCx86(CX86_CCR4);
ccr4 |= 0x38; /* FPU fast, DTE cache, Mem bypass */
setCx86(CX86_CCR3, ccr3);
/* FPU fast, DTE cache, Mem bypass */
setCx86(CX86_CCR4, getCx86(CX86_CCR4) | 0x38);
setCx86(CX86_CCR3, ccr3); /* disable MAPEN */
set_cx86_memwb();
set_cx86_reorder();
......@@ -183,14 +184,6 @@ static void __cpuinit geode_configure(void)
}
#ifdef CONFIG_PCI
static struct pci_device_id __cpuinitdata cyrix_55x0[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_5510) },
{ PCI_DEVICE(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_5520) },
{ },
};
#endif
static void __cpuinit init_cyrix(struct cpuinfo_x86 *c)
{
unsigned char dir0, dir0_msn, dir0_lsn, dir1 = 0;
......@@ -258,6 +251,8 @@ static void __cpuinit init_cyrix(struct cpuinfo_x86 *c)
case 4: /* MediaGX/GXm or Geode GXM/GXLV/GX1 */
#ifdef CONFIG_PCI
{
u32 vendor, device;
/* It isn't really a PCI quirk directly, but the cure is the
same. The MediaGX has deep magic SMM stuff that handles the
SB emulation. It throws away the fifo on disable_dma() which
......@@ -273,22 +268,34 @@ static void __cpuinit init_cyrix(struct cpuinfo_x86 *c)
printk(KERN_INFO "Working around Cyrix MediaGX virtual DMA bugs.\n");
isa_dma_bridge_buggy = 2;
/* We do this before the PCI layer is running. However we
are safe here as we know the bridge must be a Cyrix
companion and must be present */
vendor = read_pci_config_16(0, 0, 0x12, PCI_VENDOR_ID);
device = read_pci_config_16(0, 0, 0x12, PCI_DEVICE_ID);
/*
* The 5510/5520 companion chips have a funky PIT.
*/
if (pci_dev_present(cyrix_55x0))
if (vendor == PCI_VENDOR_ID_CYRIX &&
(device == PCI_DEVICE_ID_CYRIX_5510 || device == PCI_DEVICE_ID_CYRIX_5520))
pit_latch_buggy = 1;
}
#endif
c->x86_cache_size=16; /* Yep 16K integrated cache that's it */
/* GXm supports extended cpuid levels 'ala' AMD */
if (c->cpuid_level == 2) {
/* Enable cxMMX extensions (GX1 Datasheet 54) */
setCx86(CX86_CCR7, getCx86(CX86_CCR7)|1);
setCx86(CX86_CCR7, getCx86(CX86_CCR7) | 1);
/* GXlv/GXm/GX1 */
if((dir1 >= 0x50 && dir1 <= 0x54) || dir1 >= 0x63)
/*
* GXm : 0x30 ... 0x5f GXm datasheet 51
* GXlv: 0x6x GXlv datasheet 54
* ? : 0x7x
* GX1 : 0x8x GX1 datasheet 56
*/
if((0x30 <= dir1 && dir1 <= 0x6f) || (0x80 <=dir1 && dir1 <= 0x8f))
geode_configure();
get_model_name(c); /* get CPU marketing name */
return;
......@@ -415,14 +422,13 @@ static void __cpuinit cyrix_identify(struct cpuinfo_x86 * c)
if (dir0 == 5 || dir0 == 3)
{
unsigned char ccr3, ccr4;
unsigned char ccr3;
unsigned long flags;
printk(KERN_INFO "Enabling CPUID on Cyrix processor.\n");
local_irq_save(flags);
ccr3 = getCx86(CX86_CCR3);
setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* enable MAPEN */
ccr4 = getCx86(CX86_CCR4);
setCx86(CX86_CCR4, ccr4 | 0x80); /* enable cpuid */
setCx86(CX86_CCR4, getCx86(CX86_CCR4) | 0x80); /* enable cpuid */
setCx86(CX86_CCR3, ccr3); /* disable MAPEN */
local_irq_restore(flags);
}
......
......@@ -12,6 +12,7 @@
#include <asm/processor.h>
#include <asm/system.h>
#include <asm/mce.h>
#include "mce.h"
......
#include <linux/init.h>
#include <asm/mce.h>
void amd_mcheck_init(struct cpuinfo_x86 *c);
void intel_p4_mcheck_init(struct cpuinfo_x86 *c);
......@@ -9,6 +10,5 @@ void winchip_mcheck_init(struct cpuinfo_x86 *c);
/* Call the installed machine check handler for this CPU setup. */
extern fastcall void (*machine_check_vector)(struct pt_regs *, long error_code);
extern int mce_disabled;
extern int nr_mce_banks;
......@@ -12,6 +12,7 @@
#include <asm/system.h>
#include <asm/msr.h>
#include <asm/apic.h>
#include <asm/idle.h>
#include <asm/therm_throt.h>
......@@ -59,6 +60,7 @@ static void (*vendor_thermal_interrupt)(struct pt_regs *regs) = unexpected_therm
fastcall void smp_thermal_interrupt(struct pt_regs *regs)
{
exit_idle();
irq_enter();
vendor_thermal_interrupt(regs);
irq_exit();
......
......@@ -211,6 +211,9 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
default:
return -ENOTTY;
case MTRRIOC_ADD_ENTRY:
#ifdef CONFIG_COMPAT
case MTRRIOC32_ADD_ENTRY:
#endif
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
err =
......@@ -218,21 +221,33 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
file, 0);
break;
case MTRRIOC_SET_ENTRY:
#ifdef CONFIG_COMPAT
case MTRRIOC32_SET_ENTRY:
#endif
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
err = mtrr_add(sentry.base, sentry.size, sentry.type, 0);
break;
case MTRRIOC_DEL_ENTRY:
#ifdef CONFIG_COMPAT
case MTRRIOC32_DEL_ENTRY:
#endif
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
err = mtrr_file_del(sentry.base, sentry.size, file, 0);
break;
case MTRRIOC_KILL_ENTRY:
#ifdef CONFIG_COMPAT
case MTRRIOC32_KILL_ENTRY:
#endif
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
err = mtrr_del(-1, sentry.base, sentry.size);
break;
case MTRRIOC_GET_ENTRY:
#ifdef CONFIG_COMPAT
case MTRRIOC32_GET_ENTRY:
#endif
if (gentry.regnum >= num_var_ranges)
return -EINVAL;
mtrr_if->get(gentry.regnum, &gentry.base, &size, &type);
......@@ -249,6 +264,9 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
break;
case MTRRIOC_ADD_PAGE_ENTRY:
#ifdef CONFIG_COMPAT
case MTRRIOC32_ADD_PAGE_ENTRY:
#endif
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
err =
......@@ -256,21 +274,33 @@ mtrr_ioctl(struct file *file, unsigned int cmd, unsigned long __arg)
file, 1);
break;
case MTRRIOC_SET_PAGE_ENTRY:
#ifdef CONFIG_COMPAT
case MTRRIOC32_SET_PAGE_ENTRY:
#endif
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
err = mtrr_add_page(sentry.base, sentry.size, sentry.type, 0);
break;
case MTRRIOC_DEL_PAGE_ENTRY:
#ifdef CONFIG_COMPAT
case MTRRIOC32_DEL_PAGE_ENTRY:
#endif
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
err = mtrr_file_del(sentry.base, sentry.size, file, 1);
break;
case MTRRIOC_KILL_PAGE_ENTRY:
#ifdef CONFIG_COMPAT
case MTRRIOC32_KILL_PAGE_ENTRY:
#endif
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
err = mtrr_del_page(-1, sentry.base, sentry.size);
break;
case MTRRIOC_GET_PAGE_ENTRY:
#ifdef CONFIG_COMPAT
case MTRRIOC32_GET_PAGE_ENTRY:
#endif
if (gentry.regnum >= num_var_ranges)
return -EINVAL;
mtrr_if->get(gentry.regnum, &gentry.base, &size, &type);
......
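For context, these ioctls are issued from user space against /proc/mtrr. A
minimal sketch of adding a write-combining region (not part of this patch;
base/size are arbitrary, error handling trimmed):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <asm/mtrr.h>	/* struct mtrr_sentry, MTRRIOC_ADD_ENTRY */

	int main(void)
	{
		struct mtrr_sentry s = {
			.base = 0xe0000000,	/* example framebuffer start */
			.size = 0x1000000,	/* 16MB */
			.type = MTRR_TYPE_WRCOMB,
		};
		int fd = open("/proc/mtrr", O_WRONLY);

		if (fd < 0 || ioctl(fd, MTRRIOC_ADD_ENTRY, &s) < 0) {
			perror("mtrr");
			return 1;
		}
		close(fd);
		return 0;
	}

A 32-bit process on a 64-bit kernel now reaches the same handlers via the
MTRRIOC32_* numbers wired up above.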
......@@ -50,7 +50,7 @@ u32 num_var_ranges = 0;
unsigned int *usage_table;
static DEFINE_MUTEX(mtrr_mutex);
u32 size_or_mask, size_and_mask;
u64 size_or_mask, size_and_mask;
static struct mtrr_ops * mtrr_ops[X86_VENDOR_NUM] = {};
......@@ -662,8 +662,8 @@ void __init mtrr_bp_init(void)
boot_cpu_data.x86_mask == 0x4))
phys_addr = 36;
size_or_mask = ~((1 << (phys_addr - PAGE_SHIFT)) - 1);
size_and_mask = ~size_or_mask & 0xfff00000;
size_or_mask = ~((1ULL << (phys_addr - PAGE_SHIFT)) - 1);
size_and_mask = ~size_or_mask & 0xfffff00000ULL;
} else if (boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR &&
boot_cpu_data.x86 == 6) {
/* VIA C* family have Intel style MTRRs, but
......
......@@ -84,7 +84,7 @@ void get_mtrr_state(void);
extern void set_mtrr_ops(struct mtrr_ops * ops);
extern u32 size_or_mask, size_and_mask;
extern u64 size_or_mask, size_and_mask;
extern struct mtrr_ops * mtrr_if;
#define is_cpu(vnd) (mtrr_if && mtrr_if->vendor == X86_VENDOR_##vnd)
......
......@@ -29,7 +29,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, "syscall", NULL, NULL, NULL, NULL,
NULL, NULL, NULL, "mp", "nx", NULL, "mmxext", NULL,
NULL, "fxsr_opt", "rdtscp", NULL, NULL, "lm", "3dnowext", "3dnow",
NULL, "fxsr_opt", "pdpe1gb", "rdtscp", NULL, "lm", "3dnowext", "3dnow",
/* Transmeta-defined */
"recovery", "longrun", NULL, "lrti", NULL, NULL, NULL, NULL,
......@@ -47,7 +47,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
/* Intel-defined (#2) */
"pni", NULL, NULL, "monitor", "ds_cpl", "vmx", "smx", "est",
"tm2", "ssse3", "cid", NULL, NULL, "cx16", "xtpr", NULL,
NULL, NULL, "dca", NULL, NULL, NULL, NULL, NULL,
NULL, NULL, "dca", NULL, NULL, NULL, NULL, "popcnt",
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
/* VIA/Cyrix/Centaur-defined */
......@@ -57,8 +57,9 @@ static int show_cpuinfo(struct seq_file *m, void *v)
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
/* AMD-defined (#2) */
"lahf_lm", "cmp_legacy", "svm", NULL, "cr8legacy", NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
"lahf_lm", "cmp_legacy", "svm", "extapic", "cr8legacy", "abm",
"sse4a", "misalignsse",
"3dnowprefetch", "osvw", "ibs", NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
};
......@@ -69,8 +70,11 @@ static int show_cpuinfo(struct seq_file *m, void *v)
"ttp", /* thermal trip */
"tm",
"stc",
"100mhzsteps",
"hwpstate",
NULL,
/* nothing */ /* constant_tsc - moved to flags */
NULL, /* constant_tsc - moved to flags */
/* nothing */
};
struct cpuinfo_x86 *c = v;
int i, n = c - cpu_data;
......
......@@ -9,7 +9,7 @@ static void __cpuinit init_transmeta(struct cpuinfo_x86 *c)
{
unsigned int cap_mask, uk, max, dummy;
unsigned int cms_rev1, cms_rev2;
unsigned int cpu_rev, cpu_freq, cpu_flags, new_cpu_rev;
unsigned int cpu_rev, cpu_freq = 0, cpu_flags, new_cpu_rev;
char cpu_info[65];
get_model_name(c); /* Same as AMD/Cyrix */
......@@ -73,6 +73,9 @@ static void __cpuinit init_transmeta(struct cpuinfo_x86 *c)
c->x86_capability[0] = cpuid_edx(0x00000001);
wrmsr(0x80860004, cap_mask, uk);
/* All Transmeta CPUs have a constant TSC */
set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
/* If we can run i686 user-space code, call us an i686 */
#define USER686 (X86_FEATURE_TSC|X86_FEATURE_CX8|X86_FEATURE_CMOV)
if ( c->x86 == 5 && (c->x86_capability[0] & USER686) == USER686 )
......
......@@ -48,7 +48,6 @@ static struct class *cpuid_class;
#ifdef CONFIG_SMP
struct cpuid_command {
int cpu;
u32 reg;
u32 *data;
};
......@@ -57,7 +56,6 @@ static void cpuid_smp_cpuid(void *cmd_block)
{
struct cpuid_command *cmd = (struct cpuid_command *)cmd_block;
if (cmd->cpu == smp_processor_id())
cpuid(cmd->reg, &cmd->data[0], &cmd->data[1], &cmd->data[2],
&cmd->data[3]);
}
......@@ -70,11 +68,10 @@ static inline void do_cpuid(int cpu, u32 reg, u32 * data)
if (cpu == smp_processor_id()) {
cpuid(reg, &data[0], &data[1], &data[2], &data[3]);
} else {
cmd.cpu = cpu;
cmd.reg = reg;
cmd.data = data;
smp_call_function(cpuid_smp_cpuid, &cmd, 1, 1);
smp_call_function_single(cpu, cpuid_smp_cpuid, &cmd, 1, 1);
}
preempt_enable();
}
......
......@@ -14,6 +14,7 @@
#include <asm/pgtable.h>
#include <asm/page.h>
#include <asm/e820.h>
#include <asm/setup.h>
#ifdef CONFIG_EFI
int efi_enabled = 0;
......@@ -156,21 +157,22 @@ static struct resource standard_io_resources[] = { {
.flags = IORESOURCE_BUSY | IORESOURCE_IO
} };
static int romsignature(const unsigned char *x)
#define ROMSIGNATURE 0xaa55
static int __init romsignature(const unsigned char *rom)
{
unsigned short sig;
int ret = 0;
if (probe_kernel_address((const unsigned short *)x, sig) == 0)
ret = (sig == 0xaa55);
return ret;
return probe_kernel_address((const unsigned short *)rom, sig) == 0 &&
sig == ROMSIGNATURE;
}
static int __init romchecksum(unsigned char *rom, unsigned long length)
{
unsigned char *p, sum = 0;
unsigned char sum;
for (p = rom; p < rom + length; p++)
sum += *p;
for (sum = 0; length; length--)
sum += *rom++;
return sum == 0;
}
......
......@@ -30,7 +30,7 @@
* 18(%esp) - %eax
* 1C(%esp) - %ds
* 20(%esp) - %es
* 24(%esp) - %gs
* 24(%esp) - %fs
* 28(%esp) - orig_eax
* 2C(%esp) - %eip
* 30(%esp) - %cs
......@@ -99,9 +99,9 @@ VM_MASK = 0x00020000
#define SAVE_ALL \
cld; \
pushl %gs; \
pushl %fs; \
CFI_ADJUST_CFA_OFFSET 4;\
/*CFI_REL_OFFSET gs, 0;*/\
/*CFI_REL_OFFSET fs, 0;*/\
pushl %es; \
CFI_ADJUST_CFA_OFFSET 4;\
/*CFI_REL_OFFSET es, 0;*/\
......@@ -133,7 +133,7 @@ VM_MASK = 0x00020000
movl %edx, %ds; \
movl %edx, %es; \
movl $(__KERNEL_PDA), %edx; \
movl %edx, %gs
movl %edx, %fs
#define RESTORE_INT_REGS \
popl %ebx; \
......@@ -166,9 +166,9 @@ VM_MASK = 0x00020000
2: popl %es; \
CFI_ADJUST_CFA_OFFSET -4;\
/*CFI_RESTORE es;*/\
3: popl %gs; \
3: popl %fs; \
CFI_ADJUST_CFA_OFFSET -4;\
/*CFI_RESTORE gs;*/\
/*CFI_RESTORE fs;*/\
.pushsection .fixup,"ax"; \
4: movl $0,(%esp); \
jmp 1b; \
......@@ -227,6 +227,7 @@ ENTRY(ret_from_fork)
CFI_ADJUST_CFA_OFFSET -4
jmp syscall_exit
CFI_ENDPROC
END(ret_from_fork)
/*
* Return to user mode is not as complex as all this looks,
......@@ -258,6 +259,7 @@ ENTRY(resume_userspace)
# int/exception return?
jne work_pending
jmp restore_all
END(ret_from_exception)
#ifdef CONFIG_PREEMPT
ENTRY(resume_kernel)
......@@ -272,6 +274,7 @@ need_resched:
jz restore_all
call preempt_schedule_irq
jmp need_resched
END(resume_kernel)
#endif
CFI_ENDPROC
......@@ -349,16 +352,17 @@ sysenter_past_esp:
movl PT_OLDESP(%esp), %ecx
xorl %ebp,%ebp
TRACE_IRQS_ON
1: mov PT_GS(%esp), %gs
1: mov PT_FS(%esp), %fs
ENABLE_INTERRUPTS_SYSEXIT
CFI_ENDPROC
.pushsection .fixup,"ax"
2: movl $0,PT_GS(%esp)
2: movl $0,PT_FS(%esp)
jmp 1b
.section __ex_table,"a"
.align 4
.long 1b,2b
.popsection
ENDPROC(sysenter_entry)
# system call handler stub
ENTRY(system_call)
......@@ -459,6 +463,7 @@ ldt_ss:
CFI_ADJUST_CFA_OFFSET -8
jmp restore_nocheck
CFI_ENDPROC
ENDPROC(system_call)
# perform work that needs to be done immediately before resumption
ALIGN
......@@ -504,6 +509,7 @@ work_notifysig_v86:
xorl %edx, %edx
call do_notify_resume
jmp resume_userspace_sig
END(work_pending)
# perform syscall exit tracing
ALIGN
......@@ -519,6 +525,7 @@ syscall_trace_entry:
cmpl $(nr_syscalls), %eax
jnae syscall_call
jmp syscall_exit
END(syscall_trace_entry)
# perform syscall exit tracing
ALIGN
......@@ -532,6 +539,7 @@ syscall_exit_work:
movl $1, %edx
call do_syscall_trace
jmp resume_userspace
END(syscall_exit_work)
CFI_ENDPROC
RING0_INT_FRAME # can't unwind into user space anyway
......@@ -542,15 +550,17 @@ syscall_fault:
GET_THREAD_INFO(%ebp)
movl $-EFAULT,PT_EAX(%esp)
jmp resume_userspace
END(syscall_fault)
syscall_badsys:
movl $-ENOSYS,PT_EAX(%esp)
jmp resume_userspace
END(syscall_badsys)
CFI_ENDPROC
#define FIXUP_ESPFIX_STACK \
/* since we are on a wrong stack, we cant make it a C code :( */ \
movl %gs:PDA_cpu, %ebx; \
movl %fs:PDA_cpu, %ebx; \
PER_CPU(cpu_gdt_descr, %ebx); \
movl GDS_address(%ebx), %ebx; \
GET_DESC_BASE(GDT_ENTRY_ESPFIX_SS, %ebx, %eax, %ax, %al, %ah); \
......@@ -581,9 +591,9 @@ syscall_badsys:
ENTRY(interrupt)
.text
vector=0
ENTRY(irq_entries_start)
RING0_INT_FRAME
vector=0
.rept NR_IRQS
ALIGN
.if vector
......@@ -592,11 +602,16 @@ ENTRY(irq_entries_start)
1: pushl $~(vector)
CFI_ADJUST_CFA_OFFSET 4
jmp common_interrupt
.data
.previous
.long 1b
.text
.text
vector=vector+1
.endr
END(irq_entries_start)
.previous
END(interrupt)
.previous
/*
* the CPU automatically disables interrupts when executing an IRQ vector,
......@@ -609,6 +624,7 @@ common_interrupt:
movl %esp,%eax
call do_IRQ
jmp ret_from_intr
ENDPROC(common_interrupt)
CFI_ENDPROC
#define BUILD_INTERRUPT(name, nr) \
......@@ -621,18 +637,24 @@ ENTRY(name) \
movl %esp,%eax; \
call smp_/**/name; \
jmp ret_from_intr; \
CFI_ENDPROC
CFI_ENDPROC; \
ENDPROC(name)
/* The include is where all of the SMP etc. interrupts come from */
#include "entry_arch.h"
/* This alternate entry is needed because we hijack the apic LVTT */
#if defined(CONFIG_VMI) && defined(CONFIG_X86_LOCAL_APIC)
BUILD_INTERRUPT(apic_vmi_timer_interrupt,LOCAL_TIMER_VECTOR)
#endif
KPROBE_ENTRY(page_fault)
RING0_EC_FRAME
pushl $do_page_fault
CFI_ADJUST_CFA_OFFSET 4
ALIGN
error_code:
/* the function address is in %gs's slot on the stack */
/* the function address is in %fs's slot on the stack */
pushl %es
CFI_ADJUST_CFA_OFFSET 4
/*CFI_REL_OFFSET es, 0*/
......@@ -661,20 +683,20 @@ error_code:
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET ebx, 0
cld
pushl %gs
pushl %fs
CFI_ADJUST_CFA_OFFSET 4
/*CFI_REL_OFFSET gs, 0*/
/*CFI_REL_OFFSET fs, 0*/
movl $(__KERNEL_PDA), %ecx
movl %ecx, %gs
movl %ecx, %fs
UNWIND_ESPFIX_STACK
popl %ecx
CFI_ADJUST_CFA_OFFSET -4
/*CFI_REGISTER es, ecx*/
movl PT_GS(%esp), %edi # get the function address
movl PT_FS(%esp), %edi # get the function address
movl PT_ORIG_EAX(%esp), %edx # get the error code
movl $-1, PT_ORIG_EAX(%esp) # no syscall to restart
mov %ecx, PT_GS(%esp)
/*CFI_REL_OFFSET gs, ES*/
mov %ecx, PT_FS(%esp)
/*CFI_REL_OFFSET fs, ES*/
movl $(__USER_DS), %ecx
movl %ecx, %ds
movl %ecx, %es
......@@ -692,6 +714,7 @@ ENTRY(coprocessor_error)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(coprocessor_error)
ENTRY(simd_coprocessor_error)
RING0_INT_FRAME
......@@ -701,6 +724,7 @@ ENTRY(simd_coprocessor_error)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(simd_coprocessor_error)
ENTRY(device_not_available)
RING0_INT_FRAME
......@@ -721,6 +745,7 @@ device_not_available_emulate:
CFI_ADJUST_CFA_OFFSET -4
jmp ret_from_exception
CFI_ENDPROC
END(device_not_available)
/*
* Debug traps and NMI can happen at the one SYSENTER instruction
......@@ -864,10 +889,12 @@ ENTRY(native_iret)
.align 4
.long 1b,iret_exc
.previous
END(native_iret)
ENTRY(native_irq_enable_sysexit)
sti
sysexit
END(native_irq_enable_sysexit)
#endif
KPROBE_ENTRY(int3)
......@@ -890,6 +917,7 @@ ENTRY(overflow)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(overflow)
ENTRY(bounds)
RING0_INT_FRAME
......@@ -899,6 +927,7 @@ ENTRY(bounds)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(bounds)
ENTRY(invalid_op)
RING0_INT_FRAME
......@@ -908,6 +937,7 @@ ENTRY(invalid_op)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(invalid_op)
ENTRY(coprocessor_segment_overrun)
RING0_INT_FRAME
......@@ -917,6 +947,7 @@ ENTRY(coprocessor_segment_overrun)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(coprocessor_segment_overrun)
ENTRY(invalid_TSS)
RING0_EC_FRAME
......@@ -924,6 +955,7 @@ ENTRY(invalid_TSS)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(invalid_TSS)
ENTRY(segment_not_present)
RING0_EC_FRAME
......@@ -931,6 +963,7 @@ ENTRY(segment_not_present)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(segment_not_present)
ENTRY(stack_segment)
RING0_EC_FRAME
......@@ -938,6 +971,7 @@ ENTRY(stack_segment)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(stack_segment)
KPROBE_ENTRY(general_protection)
RING0_EC_FRAME
......@@ -953,6 +987,7 @@ ENTRY(alignment_check)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(alignment_check)
ENTRY(divide_error)
RING0_INT_FRAME
......@@ -962,6 +997,7 @@ ENTRY(divide_error)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(divide_error)
#ifdef CONFIG_X86_MCE
ENTRY(machine_check)
......@@ -972,6 +1008,7 @@ ENTRY(machine_check)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(machine_check)
#endif
ENTRY(spurious_interrupt_bug)
......@@ -982,6 +1019,7 @@ ENTRY(spurious_interrupt_bug)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
END(spurious_interrupt_bug)
ENTRY(kernel_thread_helper)
pushl $0 # fake return address for unwinder
......
......@@ -53,6 +53,7 @@
* any particular GDT layout, because we load our own as soon as we
* can.
*/
.section .text.head,"ax",@progbits
ENTRY(startup_32)
#ifdef CONFIG_PARAVIRT
......@@ -141,16 +142,25 @@ page_pde_offset = (__PAGE_OFFSET >> 20);
jb 10b
movl %edi,(init_pg_tables_end - __PAGE_OFFSET)
#ifdef CONFIG_SMP
xorl %ebx,%ebx /* This is the boot CPU (BSP) */
jmp 3f
/*
* Non-boot CPU entry point; entered from trampoline.S
* We can't lgdt here, because lgdt itself uses a data segment, but
* we know the trampoline has already loaded the boot_gdt_table GDT
* for us.
*
* If cpu hotplug is not supported then this code can go in init section
* which will be freed later
*/
#ifdef CONFIG_HOTPLUG_CPU
.section .text,"ax",@progbits
#else
.section .init.text,"ax",@progbits
#endif
#ifdef CONFIG_SMP
ENTRY(startup_32_smp)
cld
movl $(__BOOT_DS),%eax
......@@ -208,8 +218,8 @@ ENTRY(startup_32_smp)
xorl %ebx,%ebx
incl %ebx
3:
#endif /* CONFIG_SMP */
3:
/*
* Enable paging
......@@ -309,7 +319,7 @@ is386: movl $2,%ecx # set MP
call check_x87
call setup_pda
lgdt cpu_gdt_descr
lgdt early_gdt_descr
lidt idt_descr
ljmp $(__KERNEL_CS),$1f
1: movl $(__KERNEL_DS),%eax # reload all the segment registers
......@@ -319,12 +329,12 @@ is386: movl $2,%ecx # set MP
movl %eax,%ds
movl %eax,%es
xorl %eax,%eax # Clear FS and LDT
movl %eax,%fs
xorl %eax,%eax # Clear GS and LDT
movl %eax,%gs
lldt %ax
movl $(__KERNEL_PDA),%eax
mov %eax,%gs
mov %eax,%fs
cld # gcc2 wants the direction flag cleared at all times
pushl $0 # fake return address for unwinder
......@@ -360,12 +370,12 @@ check_x87:
* cpu_gdt_table and boot_pda; for secondary CPUs, these will be
* that CPU's GDT and PDA.
*/
setup_pda:
ENTRY(setup_pda)
/* get the PDA pointer */
movl start_pda, %eax
/* slot the PDA address into the GDT */
mov cpu_gdt_descr+2, %ecx
mov early_gdt_descr+2, %ecx
mov %ax, (__KERNEL_PDA+0+2)(%ecx) /* base & 0x0000ffff */
shr $16, %eax
mov %al, (__KERNEL_PDA+4+0)(%ecx) /* base & 0x00ff0000 */
......@@ -492,6 +502,7 @@ ignore_int:
#endif
iret
.section .text
#ifdef CONFIG_PARAVIRT
startup_paravirt:
cld
......@@ -502,10 +513,11 @@ startup_paravirt:
pushl %ecx
pushl %eax
/* paravirt.o is last in link, and that probe fn never returns */
pushl $__start_paravirtprobe
1:
movl 0(%esp), %eax
cmpl $__stop_paravirtprobe, %eax
je unhandled_paravirt
pushl (%eax)
movl 8(%esp), %eax
call *(%esp)
......@@ -517,6 +529,10 @@ startup_paravirt:
addl $4, (%esp)
jmp 1b
unhandled_paravirt:
/* Nothing wanted us: we're screwed. */
ud2
#endif
/*
......@@ -581,7 +597,7 @@ idt_descr:
# boot GDT descriptor (later on used by CPU#0):
.word 0 # 32 bit align gdt_desc.address
ENTRY(cpu_gdt_descr)
ENTRY(early_gdt_descr)
.word GDT_ENTRIES*8-1
.long cpu_gdt_table
......
......@@ -1920,7 +1920,7 @@ static void __init setup_ioapic_ids_from_mpc(void)
static void __init setup_ioapic_ids_from_mpc(void) { }
#endif
static int no_timer_check __initdata;
int no_timer_check __initdata;
static int __init notimercheck(char *s)
{
......@@ -2310,7 +2310,7 @@ static inline void __init check_timer(void)
disable_8259A_irq(0);
set_irq_chip_and_handler_name(0, &lapic_chip, handle_fasteoi_irq,
"fasteio");
"fasteoi");
apic_write_around(APIC_LVT0, APIC_DM_FIXED | vector); /* Fixed mode */
enable_8259A_irq(0);
......
......@@ -19,6 +19,8 @@
#include <linux/cpu.h>
#include <linux/delay.h>
#include <asm/idle.h>
DEFINE_PER_CPU(irq_cpustat_t, irq_stat) ____cacheline_internodealigned_in_smp;
EXPORT_PER_CPU_SYMBOL(irq_stat);
......@@ -61,6 +63,7 @@ fastcall unsigned int do_IRQ(struct pt_regs *regs)
union irq_ctx *curctx, *irqctx;
u32 *isp;
#endif
exit_idle();
if (unlikely((unsigned)irq >= NR_IRQS)) {
printk(KERN_EMERG "%s: cannot handle IRQ %d\n",
......
......@@ -363,7 +363,7 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
" pushf\n"
/* skip cs, eip, orig_eax */
" subl $12, %esp\n"
" pushl %gs\n"
" pushl %fs\n"
" pushl %ds\n"
" pushl %es\n"
" pushl %eax\n"
......@@ -387,7 +387,7 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
" popl %edi\n"
" popl %ebp\n"
" popl %eax\n"
/* skip eip, orig_eax, es, ds, gs */
/* skip eip, orig_eax, es, ds, fs */
" addl $20, %esp\n"
" popf\n"
" ret\n");
......@@ -408,7 +408,7 @@ fastcall void *__kprobes trampoline_handler(struct pt_regs *regs)
spin_lock_irqsave(&kretprobe_lock, flags);
head = kretprobe_inst_table_head(current);
/* fixup registers */
regs->xcs = __KERNEL_CS;
regs->xcs = __KERNEL_CS | get_kernel_rpl();
regs->eip = trampoline_address;
regs->orig_eax = 0xffffffff;
......
......@@ -384,7 +384,7 @@ static int do_microcode_update (void)
{
long cursor = 0;
int error = 0;
void *new_mc;
void *new_mc = NULL;
int cpu;
cpumask_t old;
......
......@@ -68,7 +68,6 @@ static inline int rdmsr_eio(u32 reg, u32 *eax, u32 *edx)
#ifdef CONFIG_SMP
struct msr_command {
int cpu;
int err;
u32 reg;
u32 data[2];
......@@ -78,7 +77,6 @@ static void msr_smp_wrmsr(void *cmd_block)
{
struct msr_command *cmd = (struct msr_command *)cmd_block;
if (cmd->cpu == smp_processor_id())
cmd->err = wrmsr_eio(cmd->reg, cmd->data[0], cmd->data[1]);
}
......@@ -86,7 +84,6 @@ static void msr_smp_rdmsr(void *cmd_block)
{
struct msr_command *cmd = (struct msr_command *)cmd_block;
if (cmd->cpu == smp_processor_id())
cmd->err = rdmsr_eio(cmd->reg, &cmd->data[0], &cmd->data[1]);
}
......@@ -99,12 +96,11 @@ static inline int do_wrmsr(int cpu, u32 reg, u32 eax, u32 edx)
if (cpu == smp_processor_id()) {
ret = wrmsr_eio(reg, eax, edx);
} else {
cmd.cpu = cpu;
cmd.reg = reg;
cmd.data[0] = eax;
cmd.data[1] = edx;
smp_call_function(msr_smp_wrmsr, &cmd, 1, 1);
smp_call_function_single(cpu, msr_smp_wrmsr, &cmd, 1, 1);
ret = cmd.err;
}
preempt_enable();
......@@ -120,10 +116,9 @@ static inline int do_rdmsr(int cpu, u32 reg, u32 * eax, u32 * edx)
if (cpu == smp_processor_id()) {
ret = rdmsr_eio(reg, eax, edx);
} else {
cmd.cpu = cpu;
cmd.reg = reg;
smp_call_function(msr_smp_rdmsr, &cmd, 1, 1);
smp_call_function_single(cpu, msr_smp_rdmsr, &cmd, 1, 1);
*eax = cmd.data[0];
*edx = cmd.data[1];
......
......@@ -185,7 +185,8 @@ static __cpuinit inline int nmi_known_cpu(void)
{
switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_AMD:
return ((boot_cpu_data.x86 == 15) || (boot_cpu_data.x86 == 6));
return ((boot_cpu_data.x86 == 15) || (boot_cpu_data.x86 == 6)
|| (boot_cpu_data.x86 == 16));
case X86_VENDOR_INTEL:
if (cpu_has(&boot_cpu_data, X86_FEATURE_ARCH_PERFMON))
return 1;
......@@ -216,6 +217,28 @@ static __init void nmi_cpu_busy(void *data)
}
#endif
static unsigned int adjust_for_32bit_ctr(unsigned int hz)
{
u64 counter_val;
unsigned int retval = hz;
/*
* On Intel CPUs with P6/ARCH_PERFMON only 32 bits in the counter
* are writable, with higher bits sign extending from bit 31.
* So, we can only program the counter with 31 bit values and
* 32nd bit should be 1, for 33.. to be 1.
* Find the appropriate nmi_hz
*/
counter_val = (u64)cpu_khz * 1000;
do_div(counter_val, retval);
if (counter_val > 0x7fffffffULL) {
u64 count = (u64)cpu_khz * 1000;
do_div(count, 0x7fffffffUL);
retval = count + 1;
}
return retval;
}
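/*
 * Worked example (hypothetical 4 GHz CPU, cpu_khz == 4000000): for
 * nmi_hz == 1 the desired count is 4,000,000,000, which exceeds
 * 0x7fffffff, so retval becomes 4e9 / 0x7fffffff + 1 = 2; the counter
 * is then programmed with 4e9 / 2 = 2,000,000,000, which fits in
 * 31 bits.
 */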
static int __init check_nmi_watchdog(void)
{
unsigned int *prev_nmi_count;
......@@ -281,18 +304,10 @@ static int __init check_nmi_watchdog(void)
struct nmi_watchdog_ctlblk *wd = &__get_cpu_var(nmi_watchdog_ctlblk);
nmi_hz = 1;
/*
* On Intel CPUs with ARCH_PERFMON only 32 bits in the counter
* are writable, with higher bits sign extending from bit 31.
* So, we can only program the counter with 31 bit values and
* 32nd bit should be 1, for 33.. to be 1.
* Find the appropriate nmi_hz
*/
if (wd->perfctr_msr == MSR_ARCH_PERFMON_PERFCTR0 &&
((u64)cpu_khz * 1000) > 0x7fffffffULL) {
u64 count = (u64)cpu_khz * 1000;
do_div(count, 0x7fffffffUL);
nmi_hz = count + 1;
if (wd->perfctr_msr == MSR_P6_PERFCTR0 ||
wd->perfctr_msr == MSR_ARCH_PERFMON_PERFCTR0) {
nmi_hz = adjust_for_32bit_ctr(nmi_hz);
}
}
......@@ -369,6 +384,34 @@ void enable_timer_nmi_watchdog(void)
}
}
static void __acpi_nmi_disable(void *__unused)
{
apic_write_around(APIC_LVT0, APIC_DM_NMI | APIC_LVT_MASKED);
}
/*
* Disable timer based NMIs on all CPUs:
*/
void acpi_nmi_disable(void)
{
if (atomic_read(&nmi_active) && nmi_watchdog == NMI_IO_APIC)
on_each_cpu(__acpi_nmi_disable, NULL, 0, 1);
}
static void __acpi_nmi_enable(void *__unused)
{
apic_write_around(APIC_LVT0, APIC_DM_NMI);
}
/*
* Enable timer based NMIs on all CPUs:
*/
void acpi_nmi_enable(void)
{
if (atomic_read(&nmi_active) && nmi_watchdog == NMI_IO_APIC)
on_each_cpu(__acpi_nmi_enable, NULL, 0, 1);
}
#ifdef CONFIG_PM
static int nmi_pm_active; /* nmi_active before suspend */
......@@ -442,6 +485,17 @@ static void write_watchdog_counter(unsigned int perfctr_msr, const char *descr)
wrmsrl(perfctr_msr, 0 - count);
}
static void write_watchdog_counter32(unsigned int perfctr_msr,
const char *descr)
{
u64 count = (u64)cpu_khz * 1000;
do_div(count, nmi_hz);
if (descr)
Dprintk("setting %s to -0x%08Lx\n", descr, count);
wrmsr(perfctr_msr, (u32)(-count), 0);
}
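Both counter writers program the hardware with the negated period, so the counter overflows and raises the watchdog NMI after exactly that many events; the 32-bit variant merely keeps the high word zero for counters whose upper bits are not writable. A standalone sketch of the two's-complement arithmetic involved (the count value is assumed for illustration):
#include <stdio.h>
#include <stdint.h>
int main(void)
{
	uint64_t count = 1000000;		/* assumed events per period */
	uint32_t programmed = (uint32_t)(-count);
	/* Counting up from 0xfff0bdc0, the counter wraps past zero
	 * after exactly 'count' increments - the overflow NMI point. */
	printf("programmed=0x%08x, events to overflow=%u\n",
	       programmed, (uint32_t)(0x100000000ULL - programmed));
	return 0;
}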
/* Note that these events don't tick when the CPU idles. This means
the frequency varies with CPU load. */
......@@ -531,7 +585,8 @@ static int setup_p6_watchdog(void)
/* setup the timer */
wrmsr(evntsel_msr, evntsel, 0);
write_watchdog_counter(perfctr_msr, "P6_PERFCTR0");
nmi_hz = adjust_for_32bit_ctr(nmi_hz);
write_watchdog_counter32(perfctr_msr, "P6_PERFCTR0");
apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= P6_EVNTSEL0_ENABLE;
wrmsr(evntsel_msr, evntsel, 0);
......@@ -704,7 +759,8 @@ static int setup_intel_arch_watchdog(void)
/* setup the timer */
wrmsr(evntsel_msr, evntsel, 0);
write_watchdog_counter(perfctr_msr, "INTEL_ARCH_PERFCTR0");
nmi_hz = adjust_for_32bit_ctr(nmi_hz);
write_watchdog_counter32(perfctr_msr, "INTEL_ARCH_PERFCTR0");
apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= ARCH_PERFMON_EVENTSEL0_ENABLE;
wrmsr(evntsel_msr, evntsel, 0);
......@@ -762,7 +818,8 @@ void setup_apic_nmi_watchdog (void *unused)
if (nmi_watchdog == NMI_LOCAL_APIC) {
switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_AMD:
if (boot_cpu_data.x86 != 6 && boot_cpu_data.x86 != 15)
if (boot_cpu_data.x86 != 6 && boot_cpu_data.x86 != 15 &&
boot_cpu_data.x86 != 16)
return;
if (!setup_k7_watchdog())
return;
......@@ -956,6 +1013,8 @@ __kprobes int nmi_watchdog_tick(struct pt_regs * regs, unsigned reason)
dummy &= ~P4_CCCR_OVF;
wrmsrl(wd->cccr_msr, dummy);
apic_write(APIC_LVTPC, APIC_DM_NMI);
/* start the cycle over again */
write_watchdog_counter(wd->perfctr_msr, NULL);
}
else if (wd->perfctr_msr == MSR_P6_PERFCTR0 ||
wd->perfctr_msr == MSR_ARCH_PERFMON_PERFCTR0) {
......@@ -964,9 +1023,12 @@ __kprobes int nmi_watchdog_tick(struct pt_regs * regs, unsigned reason)
* other P6 variant.
* ArchPerfom/Core Duo also needs this */
apic_write(APIC_LVTPC, APIC_DM_NMI);
}
/* P6/ARCH_PERFMON has 32 bit counter write */
write_watchdog_counter32(wd->perfctr_msr, NULL);
} else {
/* start the cycle over again */
write_watchdog_counter(wd->perfctr_msr, NULL);
}
rc = 1;
} else if (nmi_watchdog == NMI_IO_APIC) {
/* don't know how to accurately check for this.
......
This diff is collapsed.
#include <linux/platform_device.h>
#include <linux/errno.h>
#include <linux/init.h>
static __init int add_pcspkr(void)
{
struct platform_device *pd;
int ret;
pd = platform_device_alloc("pcspkr", -1);
if (!pd)
return -ENOMEM;
ret = platform_device_add(pd);
if (ret)
platform_device_put(pd);
return ret;
}
device_initcall(add_pcspkr);
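The alloc/add/put sequence is the standard pattern for registering a resource-less platform device: platform_device_put() is called only on failure because a successful platform_device_add() transfers the reference to the driver core. For a device this simple, the platform_device_register_simple() helper collapses the sequence into one call; a hedged equivalent (a sketch, not what the patch does):
#include <linux/platform_device.h>
#include <linux/err.h>
#include <linux/init.h>
static __init int add_pcspkr(void)
{
	/* Registers "pcspkr" with no resources; returns an ERR_PTR
	 * value on failure, converted back to a plain errno here. */
	struct platform_device *pd =
		platform_device_register_simple("pcspkr", -1, NULL, 0);
	return IS_ERR(pd) ? PTR_ERR(pd) : 0;
}
device_initcall(add_pcspkr);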
......@@ -48,6 +48,7 @@
#include <asm/i387.h>
#include <asm/desc.h>
#include <asm/vm86.h>
#include <asm/idle.h>
#ifdef CONFIG_MATH_EMULATION
#include <asm/math_emu.h>
#endif
......@@ -80,6 +81,42 @@ void (*pm_idle)(void);
EXPORT_SYMBOL(pm_idle);
static DEFINE_PER_CPU(unsigned int, cpu_idle_state);
static ATOMIC_NOTIFIER_HEAD(idle_notifier);
void idle_notifier_register(struct notifier_block *n)
{
atomic_notifier_chain_register(&idle_notifier, n);
}
void idle_notifier_unregister(struct notifier_block *n)
{
atomic_notifier_chain_unregister(&idle_notifier, n);
}
static DEFINE_PER_CPU(volatile unsigned long, idle_state);
void enter_idle(void)
{
/* needs to be atomic w.r.t. interrupts, not against other CPUs */
__set_bit(0, &__get_cpu_var(idle_state));
atomic_notifier_call_chain(&idle_notifier, IDLE_START, NULL);
}
static void __exit_idle(void)
{
/* needs to be atomic w.r.t. interrupts, not against other CPUs */
if (__test_and_clear_bit(0, &__get_cpu_var(idle_state)) == 0)
return;
atomic_notifier_call_chain(&idle_notifier, IDLE_END, NULL);
}
void exit_idle(void)
{
if (current->pid)
return;
__exit_idle();
}
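enter_idle() and exit_idle() bracket the idle routine and broadcast IDLE_START/IDLE_END through the atomic notifier chain, mirroring the existing x86-64 API. A hedged sketch of a client, assuming the new asm/idle.h exports the same IDLE_START/IDLE_END events as x86-64 (the callback and counter names are made up for illustration):
#include <linux/notifier.h>
#include <linux/init.h>
#include <asm/idle.h>
static unsigned long idle_entries;	/* illustrative statistic */
static int count_idle(struct notifier_block *nb, unsigned long event,
		      void *unused)
{
	/* Runs with interrupts off, from enter_idle()/__exit_idle() */
	if (event == IDLE_START)
		idle_entries++;
	return NOTIFY_OK;
}
static struct notifier_block count_idle_nb = {
	.notifier_call = count_idle,
};
static int __init count_idle_init(void)
{
	idle_notifier_register(&count_idle_nb);
	return 0;
}
device_initcall(count_idle_init);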
void disable_hlt(void)
{
hlt_counter++;
......@@ -130,6 +167,7 @@ EXPORT_SYMBOL(default_idle);
*/
static void poll_idle (void)
{
local_irq_enable();
cpu_relax();
}
......@@ -189,7 +227,16 @@ void cpu_idle(void)
play_dead();
__get_cpu_var(irq_stat).idle_timestamp = jiffies;
/*
* Idle routines should keep interrupts disabled
* from here on, until they go to idle.
* Otherwise, idle callbacks can misfire.
*/
local_irq_disable();
enter_idle();
idle();
__exit_idle();
}
preempt_enable_no_resched();
schedule();
......@@ -243,7 +290,11 @@ void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
__monitor((void *)&current_thread_info()->flags, 0, 0);
smp_mb();
if (!need_resched())
__mwait(eax, ecx);
__sti_mwait(eax, ecx);
else
local_irq_enable();
} else {
local_irq_enable();
}
}
......@@ -308,8 +359,8 @@ void show_regs(struct pt_regs * regs)
regs->eax,regs->ebx,regs->ecx,regs->edx);
printk("ESI: %08lx EDI: %08lx EBP: %08lx",
regs->esi, regs->edi, regs->ebp);
printk(" DS: %04x ES: %04x GS: %04x\n",
0xffff & regs->xds,0xffff & regs->xes, 0xffff & regs->xgs);
printk(" DS: %04x ES: %04x FS: %04x\n",
0xffff & regs->xds,0xffff & regs->xes, 0xffff & regs->xfs);
cr0 = read_cr0();
cr2 = read_cr2();
......@@ -340,7 +391,7 @@ int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
regs.xds = __USER_DS;
regs.xes = __USER_DS;
regs.xgs = __KERNEL_PDA;
regs.xfs = __KERNEL_PDA;
regs.orig_eax = -1;
regs.eip = (unsigned long) kernel_thread_helper;
regs.xcs = __KERNEL_CS | get_kernel_rpl();
......@@ -425,7 +476,7 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long esp,
p->thread.eip = (unsigned long) ret_from_fork;
savesegment(fs,p->thread.fs);
savesegment(gs,p->thread.gs);
tsk = current;
if (unlikely(test_tsk_thread_flag(tsk, TIF_IO_BITMAP))) {
......@@ -501,8 +552,8 @@ void dump_thread(struct pt_regs * regs, struct user * dump)
dump->regs.eax = regs->eax;
dump->regs.ds = regs->xds;
dump->regs.es = regs->xes;
savesegment(fs,dump->regs.fs);
dump->regs.gs = regs->xgs;
dump->regs.fs = regs->xfs;
savesegment(gs,dump->regs.gs);
dump->regs.orig_eax = regs->orig_eax;
dump->regs.eip = regs->eip;
dump->regs.cs = regs->xcs;
......@@ -653,7 +704,7 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
load_esp0(tss, next);
/*
* Save away %fs. No need to save %gs, as it was saved on the
* Save away %gs. No need to save %fs, as it was saved on the
* stack on entry. No need to save %es and %ds, as those are
* always kernel segments while inside the kernel. Doing this
* before setting the new TLS descriptors avoids the situation
......@@ -662,7 +713,7 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
* used %fs or %gs (it does not today), or if the kernel is
* running inside of a hypervisor layer.
*/
savesegment(fs, prev->fs);
savesegment(gs, prev->gs);
/*
* Load the per-thread Thread-Local Storage descriptor.
......@@ -670,14 +721,13 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
load_TLS(next, cpu);
/*
* Restore %fs if needed.
*
* Glibc normally makes %fs be zero.
* Restore IOPL if needed. In normal use, the flags restore
* in the switch assembly will handle this. But if the kernel
* is running virtualized at a non-zero CPL, the popf will
* not restore flags, so it must be done in a separate step.
*/
if (unlikely(prev->fs | next->fs))
loadsegment(fs, next->fs);
write_pda(pcurrent, next_p);
if (get_kernel_rpl() && unlikely(prev->iopl != next->iopl))
set_iopl_mask(next->iopl);
/*
* Now maybe handle debug registers and/or IO bitmaps
......@@ -688,6 +738,15 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
disable_tsc(prev_p, next_p);
/*
* Leave lazy mode, flushing any hypercalls made here.
* This must be done before restoring TLS segments so
* the GDT and LDT are properly updated, and must be
* done before math_state_restore, so the TS bit is up
* to date.
*/
arch_leave_lazy_cpu_mode();
/* If the task has used fpu the last 5 timeslices, just do a full
* restore of the math state immediately to avoid the trap; the
* chances of needing FPU soon are obviously high now
......@@ -695,6 +754,14 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
if (next_p->fpu_counter > 5)
math_state_restore();
/*
* Restore %gs if needed (which is common)
*/
if (prev->gs | next->gs)
loadsegment(gs, next->gs);
write_pda(pcurrent, next_p);
return prev_p;
}
......
......@@ -89,14 +89,14 @@ static int putreg(struct task_struct *child,
unsigned long regno, unsigned long value)
{
switch (regno >> 2) {
case FS:
case GS:
if (value && (value & 3) != 3)
return -EIO;
child->thread.fs = value;
child->thread.gs = value;
return 0;
case DS:
case ES:
case GS:
case FS:
if (value && (value & 3) != 3)
return -EIO;
value &= 0xffff;
......@@ -112,7 +112,7 @@ static int putreg(struct task_struct *child,
value |= get_stack_long(child, EFL_OFFSET) & ~FLAG_MASK;
break;
}
if (regno > ES*4)
if (regno > FS*4)
regno -= 1*4;
put_stack_long(child, regno, value);
return 0;
......@@ -124,18 +124,18 @@ static unsigned long getreg(struct task_struct *child,
unsigned long retval = ~0UL;
switch (regno >> 2) {
case FS:
retval = child->thread.fs;
case GS:
retval = child->thread.gs;
break;
case DS:
case ES:
case GS:
case FS:
case SS:
case CS:
retval = 0xffff;
/* fall through */
default:
if (regno > ES*4)
if (regno > FS*4)
regno -= 1*4;
retval &= get_stack_long(child, regno);
}
......
......@@ -33,7 +33,6 @@
#include <linux/initrd.h>
#include <linux/bootmem.h>
#include <linux/seq_file.h>
#include <linux/platform_device.h>
#include <linux/console.h>
#include <linux/mca.h>
#include <linux/root_dev.h>
......@@ -60,6 +59,7 @@
#include <asm/io_apic.h>
#include <asm/ist.h>
#include <asm/io.h>
#include <asm/vmi.h>
#include <setup_arch.h>
#include <bios_ebda.h>
......@@ -581,6 +581,14 @@ void __init setup_arch(char **cmdline_p)
max_low_pfn = setup_memory();
#ifdef CONFIG_VMI
/*
* Must be after max_low_pfn is determined, and before kernel
* pagetables are setup.
*/
vmi_init();
#endif
/*
* NOTE: before this point _nobody_ is allowed to allocate
* any memory using the bootmem allocator. Although the
......@@ -651,28 +659,3 @@ void __init setup_arch(char **cmdline_p)
#endif
tsc_init();
}
static __init int add_pcspkr(void)
{
struct platform_device *pd;
int ret;
pd = platform_device_alloc("pcspkr", -1);
if (!pd)
return -ENOMEM;
ret = platform_device_add(pd);
if (ret)
platform_device_put(pd);
return ret;
}
device_initcall(add_pcspkr);
/*
* Local Variables:
* mode:c
* c-file-style:"k&r"
* c-basic-offset:8
* End:
*/
......@@ -21,6 +21,7 @@
#include <linux/suspend.h>
#include <linux/ptrace.h>
#include <linux/elf.h>
#include <linux/binfmts.h>
#include <asm/processor.h>
#include <asm/ucontext.h>
#include <asm/uaccess.h>
......@@ -128,8 +129,8 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *peax
X86_EFLAGS_TF | X86_EFLAGS_SF | X86_EFLAGS_ZF | \
X86_EFLAGS_AF | X86_EFLAGS_PF | X86_EFLAGS_CF)
COPY_SEG(gs);
GET_SEG(fs);
GET_SEG(gs);
COPY_SEG(fs);
COPY_SEG(es);
COPY_SEG(ds);
COPY(edi);
......@@ -244,9 +245,9 @@ setup_sigcontext(struct sigcontext __user *sc, struct _fpstate __user *fpstate,
{
int tmp, err = 0;
err |= __put_user(regs->xgs, (unsigned int __user *)&sc->gs);
savesegment(fs, tmp);
err |= __put_user(tmp, (unsigned int __user *)&sc->fs);
err |= __put_user(regs->xfs, (unsigned int __user *)&sc->fs);
savesegment(gs, tmp);
err |= __put_user(tmp, (unsigned int __user *)&sc->gs);
err |= __put_user(regs->xes, (unsigned int __user *)&sc->es);
err |= __put_user(regs->xds, (unsigned int __user *)&sc->ds);
......@@ -349,7 +350,10 @@ static int setup_frame(int sig, struct k_sigaction *ka,
goto give_sigsegv;
}
if (current->binfmt->hasvdso)
restorer = (void *)VDSO_SYM(&__kernel_sigreturn);
else
restorer = (void *)&frame->retcode;
if (ka->sa.sa_flags & SA_RESTORER)
restorer = ka->sa.sa_restorer;
......
......@@ -23,6 +23,7 @@
#include <asm/mtrr.h>
#include <asm/tlbflush.h>
#include <asm/idle.h>
#include <mach_apic.h>
/*
......@@ -374,8 +375,7 @@ static void flush_tlb_others(cpumask_t cpumask, struct mm_struct *mm,
/*
* i'm not happy about this global shared spinlock in the
* MM hot path, but we'll see how contended it is.
* Temporarily this turns IRQs off, so that lockups are
* detected by the NMI watchdog.
* AK: x86-64 has a faster method that could be ported.
*/
spin_lock(&tlbstate_lock);
......@@ -400,7 +400,7 @@ static void flush_tlb_others(cpumask_t cpumask, struct mm_struct *mm,
while (!cpus_empty(flush_cpumask))
/* nothing. lockup detection does not belong here */
mb();
cpu_relax();
flush_mm = NULL;
flush_va = 0;
......@@ -624,6 +624,7 @@ fastcall void smp_call_function_interrupt(struct pt_regs *regs)
/*
* At this point the info structure may be out of scope unless wait==1
*/
exit_idle();
irq_enter();
(*func)(info);
irq_exit();
......
......@@ -63,6 +63,7 @@
#include <mach_apic.h>
#include <mach_wakecpu.h>
#include <smpboot_hooks.h>
#include <asm/vmi.h>
/* Set if we find a B stepping CPU */
static int __devinitdata smp_b_stepping;
......@@ -545,12 +546,15 @@ static void __cpuinit start_secondary(void *unused)
* booting is too fragile that we want to limit the
* things done here to the most necessary things.
*/
#ifdef CONFIG_VMI
vmi_bringup();
#endif
secondary_cpu_init();
preempt_disable();
smp_callin();
while (!cpu_isset(smp_processor_id(), smp_commenced_mask))
rep_nop();
setup_secondary_APIC_clock();
setup_secondary_clock();
if (nmi_watchdog == NMI_IO_APIC) {
disable_8259A_irq(0);
enable_NMI_through_LVT0(NULL);
......@@ -619,7 +623,6 @@ extern struct {
unsigned short ss;
} stack_start;
extern struct i386_pda *start_pda;
extern struct Xgt_desc_struct cpu_gdt_descr;
#ifdef CONFIG_NUMA
......@@ -834,6 +837,13 @@ wakeup_secondary_cpu(int phys_apicid, unsigned long start_eip)
else
num_starts = 0;
/*
* Paravirt / VMI wants a startup IPI hook here to set up the
* target processor state.
*/
startup_ipi_hook(phys_apicid, (unsigned long) start_secondary,
(unsigned long) stack_start.esp);
/*
* Run STARTUP IPI loop.
*/
......@@ -1320,7 +1330,7 @@ static void __init smp_boot_cpus(unsigned int max_cpus)
smpboot_setup_io_apic();
setup_boot_APIC_clock();
setup_boot_clock();
/*
* Synchronize the TSC with the AP
......
......@@ -78,7 +78,7 @@ int __init sysenter_setup(void)
syscall_pages[0] = virt_to_page(syscall_page);
#ifdef CONFIG_COMPAT_VDSO
__set_fixmap(FIX_VDSO, __pa(syscall_page), PAGE_READONLY);
__set_fixmap(FIX_VDSO, __pa(syscall_page), PAGE_READONLY_EXEC);
printk("Compat vDSO mapped to %08lx.\n", __fix_to_virt(FIX_VDSO));
#endif
......
......@@ -131,15 +131,13 @@ unsigned long profile_pc(struct pt_regs *regs)
unsigned long pc = instruction_pointer(regs);
#ifdef CONFIG_SMP
if (!user_mode_vm(regs) && in_lock_functions(pc)) {
if (!v8086_mode(regs) && SEGMENT_IS_KERNEL_CODE(regs->xcs) &&
in_lock_functions(pc)) {
#ifdef CONFIG_FRAME_POINTER
return *(unsigned long *)(regs->ebp + 4);
#else
unsigned long *sp;
if ((regs->xcs & 3) == 0)
sp = (unsigned long *)&regs->esp;
else
sp = (unsigned long *)regs->esp;
unsigned long *sp = (unsigned long *)&regs->esp;
/* Return address is either directly at stack pointer
or above a saved eflags. Eflags has bits 22-31 zero,
kernel addresses don't. */
......@@ -232,6 +230,7 @@ EXPORT_SYMBOL(get_cmos_time);
static void sync_cmos_clock(unsigned long dummy);
static DEFINE_TIMER(sync_cmos_timer, sync_cmos_clock, 0, 0);
int no_sync_cmos_clock;
static void sync_cmos_clock(unsigned long dummy)
{
......@@ -275,6 +274,7 @@ static void sync_cmos_clock(unsigned long dummy)
void notify_arch_cmos_timer(void)
{
if (!no_sync_cmos_clock)
mod_timer(&sync_cmos_timer, jiffies + 1);
}
......
......@@ -94,6 +94,7 @@ asmlinkage void spurious_interrupt_bug(void);
asmlinkage void machine_check(void);
int kstack_depth_to_print = 24;
static unsigned int code_bytes = 64;
ATOMIC_NOTIFIER_HEAD(i386die_chain);
int register_die_notifier(struct notifier_block *nb)
......@@ -291,10 +292,11 @@ void show_registers(struct pt_regs *regs)
int i;
int in_kernel = 1;
unsigned long esp;
unsigned short ss;
unsigned short ss, gs;
esp = (unsigned long) (&regs->esp);
savesegment(ss, ss);
savesegment(gs, gs);
if (user_mode_vm(regs)) {
in_kernel = 0;
esp = regs->esp;
......@@ -313,8 +315,8 @@ void show_registers(struct pt_regs *regs)
regs->eax, regs->ebx, regs->ecx, regs->edx);
printk(KERN_EMERG "esi: %08lx edi: %08lx ebp: %08lx esp: %08lx\n",
regs->esi, regs->edi, regs->ebp, esp);
printk(KERN_EMERG "ds: %04x es: %04x ss: %04x\n",
regs->xds & 0xffff, regs->xes & 0xffff, ss);
printk(KERN_EMERG "ds: %04x es: %04x fs: %04x gs: %04x ss: %04x\n",
regs->xds & 0xffff, regs->xes & 0xffff, regs->xfs & 0xffff, gs, ss);
printk(KERN_EMERG "Process %.*s (pid: %d, ti=%p task=%p task.ti=%p)",
TASK_COMM_LEN, current->comm, current->pid,
current_thread_info(), current, current->thread_info);
......@@ -324,7 +326,8 @@ void show_registers(struct pt_regs *regs)
*/
if (in_kernel) {
u8 *eip;
int code_bytes = 64;
unsigned int code_prologue = code_bytes * 43 / 64;
unsigned int code_len = code_bytes;
unsigned char c;
printk("\n" KERN_EMERG "Stack: ");
......@@ -332,14 +335,14 @@ void show_registers(struct pt_regs *regs)
printk(KERN_EMERG "Code: ");
eip = (u8 *)regs->eip - 43;
eip = (u8 *)regs->eip - code_prologue;
if (eip < (u8 *)PAGE_OFFSET ||
probe_kernel_address(eip, c)) {
/* try starting at EIP */
eip = (u8 *)regs->eip;
code_bytes = 32;
code_len = code_len - code_prologue + 1;
}
for (i = 0; i < code_bytes; i++, eip++) {
for (i = 0; i < code_len; i++, eip++) {
if (eip < (u8 *)PAGE_OFFSET ||
probe_kernel_address(eip, c)) {
printk(" Bad EIP value.");
......@@ -1191,3 +1194,13 @@ static int __init kstack_setup(char *s)
return 1;
}
__setup("kstack=", kstack_setup);
static int __init code_bytes_setup(char *s)
{
code_bytes = simple_strtoul(s, NULL, 0);
if (code_bytes > 8192)
code_bytes = 8192;
return 1;
}
__setup("code_bytes=", code_bytes_setup);
......@@ -23,6 +23,7 @@
* an extra value to store the TSC freq
*/
unsigned int tsc_khz;
unsigned long long (*custom_sched_clock)(void);
int tsc_disable;
......@@ -107,14 +108,14 @@ unsigned long long sched_clock(void)
{
unsigned long long this_offset;
if (unlikely(custom_sched_clock))
return (*custom_sched_clock)();
/*
* in the NUMA case we dont use the TSC as they are not
* synchronized across all CPUs.
* Fall back to jiffies if there's no TSC available:
*/
#ifndef CONFIG_NUMA
if (!cpu_khz || check_tsc_unstable())
#endif
/* no locking but a rare wrong value is not a big deal */
if (unlikely(tsc_disable))
/* No locking but a rare wrong value is not a big deal: */
return (jiffies_64 - INITIAL_JIFFIES) * (1000000000 / HZ);
/* read the Time Stamp Counter: */
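The jiffies fallback above is deliberately coarse: the clock only advances by a fixed 1000000000/HZ nanoseconds per tick. A standalone sketch of that step size (the HZ value is assumed):
#include <stdio.h>
int main(void)
{
	unsigned long hz = 250;			/* assumed CONFIG_HZ */
	unsigned long long jiffies_elapsed = 3;	/* example tick count */
	/* Each jiffy is 1e9/HZ ns, so at HZ=250 the fallback clock
	 * moves in 4 ms steps - far coarser than the TSC path. */
	printf("%llu ns\n", jiffies_elapsed * (1000000000ULL / hz));
	return 0;
}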
......@@ -194,13 +195,13 @@ EXPORT_SYMBOL(recalibrate_cpu_khz);
void __init tsc_init(void)
{
if (!cpu_has_tsc || tsc_disable)
return;
goto out_no_tsc;
cpu_khz = calculate_cpu_khz();
tsc_khz = cpu_khz;
if (!cpu_khz)
return;
goto out_no_tsc;
printk("Detected %lu.%03lu MHz processor.\n",
(unsigned long)cpu_khz / 1000,
......@@ -208,6 +209,15 @@ void __init tsc_init(void)
set_cyc2ns_scale(cpu_khz);
use_tsc_delay();
return;
out_no_tsc:
/*
* Set the tsc_disable flag if there's no TSC support; this
* makes it a fast flag for the kernel to see whether it
* should be using the TSC.
*/
tsc_disable = 1;
}
#ifdef CONFIG_CPU_FREQ
......
......@@ -96,12 +96,12 @@ static int copy_vm86_regs_to_user(struct vm86_regs __user *user,
{
int ret = 0;
/* kernel_vm86_regs is missing xfs, so copy everything up to
(but not including) xgs, and then rest after xgs. */
ret += copy_to_user(user, regs, offsetof(struct kernel_vm86_regs, pt.xgs));
ret += copy_to_user(&user->__null_gs, &regs->pt.xgs,
/* kernel_vm86_regs is missing xgs, so copy everything up to
(but not including) orig_eax, and then rest including orig_eax. */
ret += copy_to_user(user, regs, offsetof(struct kernel_vm86_regs, pt.orig_eax));
ret += copy_to_user(&user->orig_eax, &regs->pt.orig_eax,
sizeof(struct kernel_vm86_regs) -
offsetof(struct kernel_vm86_regs, pt.xgs));
offsetof(struct kernel_vm86_regs, pt.orig_eax));
return ret;
}
......@@ -113,12 +113,13 @@ static int copy_vm86_regs_from_user(struct kernel_vm86_regs *regs,
{
int ret = 0;
ret += copy_from_user(regs, user, offsetof(struct kernel_vm86_regs, pt.xgs));
ret += copy_from_user(&regs->pt.xgs, &user->__null_gs,
/* copy eax-xfs inclusive */
ret += copy_from_user(regs, user, offsetof(struct kernel_vm86_regs, pt.orig_eax));
/* copy orig_eax-__gsh+extra */
ret += copy_from_user(&regs->pt.orig_eax, &user->orig_eax,
sizeof(struct kernel_vm86_regs) -
offsetof(struct kernel_vm86_regs, pt.xgs) +
offsetof(struct kernel_vm86_regs, pt.orig_eax) +
extra);
return ret;
}
......@@ -157,8 +158,8 @@ struct pt_regs * fastcall save_v86_state(struct kernel_vm86_regs * regs)
ret = KVM86->regs32;
loadsegment(fs, current->thread.saved_fs);
ret->xgs = current->thread.saved_gs;
ret->xfs = current->thread.saved_fs;
loadsegment(gs, current->thread.saved_gs);
return ret;
}
......@@ -285,9 +286,9 @@ static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk
*/
info->regs.pt.xds = 0;
info->regs.pt.xes = 0;
info->regs.pt.xgs = 0;
info->regs.pt.xfs = 0;
/* we are clearing fs later just before "jmp resume_userspace",
/* we are clearing gs later just before "jmp resume_userspace",
* because it is not saved/restored.
*/
......@@ -321,8 +322,8 @@ static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk
*/
info->regs32->eax = 0;
tsk->thread.saved_esp0 = tsk->thread.esp0;
savesegment(fs, tsk->thread.saved_fs);
tsk->thread.saved_gs = info->regs32->xgs;
tsk->thread.saved_fs = info->regs32->xfs;
savesegment(gs, tsk->thread.saved_gs);
tss = &per_cpu(init_tss, get_cpu());
tsk->thread.esp0 = (unsigned long) &info->VM86_TSS_ESP0;
......@@ -342,7 +343,7 @@ static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk
__asm__ __volatile__(
"movl %0,%%esp\n\t"
"movl %1,%%ebp\n\t"
"mov %2, %%fs\n\t"
"mov %2, %%gs\n\t"
"jmp resume_userspace"
: /* no outputs */
:"r" (&info->regs), "r" (task_thread_info(tsk)), "r" (0));
......
This diff is collapsed.
This diff is collapsed.
......@@ -37,9 +37,14 @@ SECTIONS
{
. = LOAD_OFFSET + LOAD_PHYSICAL_ADDR;
phys_startup_32 = startup_32 - LOAD_OFFSET;
.text.head : AT(ADDR(.text.head) - LOAD_OFFSET) {
_text = .; /* Text and read-only data */
*(.text.head)
} :text = 0x9090
/* read-only */
.text : AT(ADDR(.text) - LOAD_OFFSET) {
_text = .; /* Text and read-only data */
*(.text)
SCHED_TEXT
LOCK_TEXT
......
......@@ -56,15 +56,14 @@ static int reg_offset_vm86[] = {
#define VM86_REG_(x) (*(unsigned short *) \
(reg_offset_vm86[((unsigned)x)]+(u_char *) FPU_info))
/* These are dummy, fs and gs are not saved on the stack. */
#define ___FS ___ds
/* This is a dummy; gs is not saved on the stack. */
#define ___GS ___ds
static int reg_offset_pm[] = {
offsetof(struct info,___cs),
offsetof(struct info,___ds),
offsetof(struct info,___es),
offsetof(struct info,___FS),
offsetof(struct info,___fs),
offsetof(struct info,___GS),
offsetof(struct info,___ss),
offsetof(struct info,___ds)
......@@ -169,13 +168,10 @@ static long pm_address(u_char FPU_modrm, u_char segment,
switch ( segment )
{
/* fs and gs aren't used by the kernel, so they still have their
user-space values. */
case PREFIX_FS_-1:
/* N.B. - movl %seg, mem is a 2 byte write regardless of prefix */
savesegment(fs, addr->selector);
break;
/* gs isn't used by the kernel, so it still has its
user-space value. */
case PREFIX_GS_-1:
/* N.B. - movl %seg, mem is a 2 byte write regardless of prefix */
savesegment(gs, addr->selector);
break;
default:
......
......@@ -48,9 +48,11 @@
#define status_word() \
((partial_status & ~SW_Top & 0xffff) | ((top << SW_Top_Shift) & SW_Top))
#define setcc(cc) ({ \
partial_status &= ~(SW_C0|SW_C1|SW_C2|SW_C3); \
partial_status |= (cc) & (SW_C0|SW_C1|SW_C2|SW_C3); })
static inline void setcc(int cc)
{
partial_status &= ~(SW_C0|SW_C1|SW_C2|SW_C3);
partial_status |= (cc) & (SW_C0|SW_C1|SW_C2|SW_C3);
}
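Converting setcc() from a macro to an inline function is purely a cleanup: the compiler now type-checks the cc argument and the body is no longer textually expanded at each call site, while inlining should leave the generated code unchanged.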
#ifdef PECULIAR_486
/* Default, this conveys no information, but an 80486 does it. */
......
......@@ -101,7 +101,6 @@ extern void find_max_pfn(void);
extern void add_one_highpage_init(struct page *, int, int);
extern struct e820map e820;
extern unsigned long init_pg_tables_end;
extern unsigned long highend_pfn, highstart_pfn;
extern unsigned long max_low_pfn;
extern unsigned long totalram_pages;
......
......@@ -46,17 +46,17 @@ int unregister_page_fault_notifier(struct notifier_block *nb)
}
EXPORT_SYMBOL_GPL(unregister_page_fault_notifier);
static inline int notify_page_fault(enum die_val val, const char *str,
struct pt_regs *regs, long err, int trap, int sig)
static inline int notify_page_fault(struct pt_regs *regs, long err)
{
struct die_args args = {
.regs = regs,
.str = str,
.str = "page fault",
.err = err,
.trapnr = trap,
.signr = sig
.trapnr = 14,
.signr = SIGSEGV
};
return atomic_notifier_call_chain(&notify_page_fault_chain, val, &args);
return atomic_notifier_call_chain(&notify_page_fault_chain,
DIE_PAGE_FAULT, &args);
}
/*
......@@ -327,8 +327,7 @@ fastcall void __kprobes do_page_fault(struct pt_regs *regs,
if (unlikely(address >= TASK_SIZE)) {
if (!(error_code & 0x0000000d) && vmalloc_fault(address) >= 0)
return;
if (notify_page_fault(DIE_PAGE_FAULT, "page fault", regs, error_code, 14,
SIGSEGV) == NOTIFY_STOP)
if (notify_page_fault(regs, error_code) == NOTIFY_STOP)
return;
/*
* Don't take the mm semaphore here. If we fixup a prefetch
......@@ -337,8 +336,7 @@ fastcall void __kprobes do_page_fault(struct pt_regs *regs,
goto bad_area_nosemaphore;
}
if (notify_page_fault(DIE_PAGE_FAULT, "page fault", regs, error_code, 14,
SIGSEGV) == NOTIFY_STOP)
if (notify_page_fault(regs, error_code) == NOTIFY_STOP)
return;
/* It's safe to allow irq's after cr2 has been saved and the vmalloc
......
......@@ -62,6 +62,7 @@ static pmd_t * __init one_md_table_init(pgd_t *pgd)
#ifdef CONFIG_X86_PAE
pmd_table = (pmd_t *) alloc_bootmem_low_pages(PAGE_SIZE);
paravirt_alloc_pd(__pa(pmd_table) >> PAGE_SHIFT);
set_pgd(pgd, __pgd(__pa(pmd_table) | _PAGE_PRESENT));
pud = pud_offset(pgd, 0);
if (pmd_table != pmd_offset(pud, 0))
......@@ -82,6 +83,7 @@ static pte_t * __init one_page_table_init(pmd_t *pmd)
{
if (pmd_none(*pmd)) {
pte_t *page_table = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE);
paravirt_alloc_pt(__pa(page_table) >> PAGE_SHIFT);
set_pmd(pmd, __pmd(__pa(page_table) | _PAGE_TABLE));
if (page_table != pte_offset_kernel(pmd, 0))
BUG();
......@@ -345,6 +347,8 @@ static void __init pagetable_init (void)
/* Init entries of the first-level page table to the zero page */
for (i = 0; i < PTRS_PER_PGD; i++)
set_pgd(pgd_base + i, __pgd(__pa(empty_zero_page) | _PAGE_PRESENT));
#else
paravirt_alloc_pd(__pa(swapper_pg_dir) >> PAGE_SHIFT);
#endif
/* Enable PSE if available */
......
......@@ -60,6 +60,7 @@ static struct page *split_large_page(unsigned long address, pgprot_t prot,
address = __pa(address);
addr = address & LARGE_PAGE_MASK;
pbase = (pte_t *)page_address(base);
paravirt_alloc_pt(page_to_pfn(base));
for (i = 0; i < PTRS_PER_PTE; i++, addr += PAGE_SIZE) {
set_pte(&pbase[i], pfn_pte(addr >> PAGE_SHIFT,
addr == address ? prot : ref_prot));
......@@ -172,6 +173,7 @@ __change_page_attr(struct page *page, pgprot_t prot)
if (!PageReserved(kpte_page)) {
if (cpu_has_pse && (page_private(kpte_page) == 0)) {
ClearPagePrivate(kpte_page);
paravirt_release_pt(page_to_pfn(kpte_page));
list_add(&kpte_page->lru, &df_list);
revert_page(kpte_page, address);
}
......
......@@ -171,6 +171,8 @@ void __set_fixmap (enum fixed_addresses idx, unsigned long phys, pgprot_t flags)
void reserve_top_address(unsigned long reserve)
{
BUG_ON(fixmaps > 0);
printk(KERN_INFO "Reserving virtual address space above 0x%08x\n",
(int)-reserve);
#ifdef CONFIG_COMPAT_VDSO
BUG_ON(reserve != 0);
#else
......@@ -248,9 +250,15 @@ void pgd_ctor(void *pgd, struct kmem_cache *cache, unsigned long unused)
clone_pgd_range((pgd_t *)pgd + USER_PTRS_PER_PGD,
swapper_pg_dir + USER_PTRS_PER_PGD,
KERNEL_PGD_PTRS);
if (PTRS_PER_PMD > 1)
return;
/* must happen under lock */
paravirt_alloc_pd_clone(__pa(pgd) >> PAGE_SHIFT,
__pa(swapper_pg_dir) >> PAGE_SHIFT,
USER_PTRS_PER_PGD, PTRS_PER_PGD - USER_PTRS_PER_PGD);
pgd_list_add(pgd);
spin_unlock_irqrestore(&pgd_lock, flags);
}
......@@ -260,6 +268,7 @@ void pgd_dtor(void *pgd, struct kmem_cache *cache, unsigned long unused)
{
unsigned long flags; /* can be called from interrupt context */
paravirt_release_pd(__pa(pgd) >> PAGE_SHIFT);
spin_lock_irqsave(&pgd_lock, flags);
pgd_list_del(pgd);
spin_unlock_irqrestore(&pgd_lock, flags);
......@@ -277,13 +286,18 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
pmd_t *pmd = kmem_cache_alloc(pmd_cache, GFP_KERNEL);
if (!pmd)
goto out_oom;
paravirt_alloc_pd(__pa(pmd) >> PAGE_SHIFT);
set_pgd(&pgd[i], __pgd(1 + __pa(pmd)));
}
return pgd;
out_oom:
for (i--; i >= 0; i--)
kmem_cache_free(pmd_cache, (void *)__va(pgd_val(pgd[i])-1));
for (i--; i >= 0; i--) {
pgd_t pgdent = pgd[i];
void* pmd = (void *)__va(pgd_val(pgdent)-1);
paravirt_release_pd(__pa(pmd) >> PAGE_SHIFT);
kmem_cache_free(pmd_cache, pmd);
}
kmem_cache_free(pgd_cache, pgd);
return NULL;
}
......@@ -294,8 +308,12 @@ void pgd_free(pgd_t *pgd)
/* in the PAE case user pgd entries are overwritten before usage */
if (PTRS_PER_PMD > 1)
for (i = 0; i < USER_PTRS_PER_PGD; ++i)
kmem_cache_free(pmd_cache, (void *)__va(pgd_val(pgd[i])-1));
for (i = 0; i < USER_PTRS_PER_PGD; ++i) {
pgd_t pgdent = pgd[i];
void* pmd = (void *)__va(pgd_val(pgdent)-1);
paravirt_release_pd(__pa(pmd) >> PAGE_SHIFT);
kmem_cache_free(pmd_cache, pmd);
}
/* in the non-PAE case, free_pgtables() clears user pgd entries */
kmem_cache_free(pgd_cache, pgd);
}
......@@ -24,7 +24,8 @@
#define CTR_IS_RESERVED(msrs,c) (msrs->counters[(c)].addr ? 1 : 0)
#define CTR_READ(l,h,msrs,c) do {rdmsr(msrs->counters[(c)].addr, (l), (h));} while (0)
#define CTR_WRITE(l,msrs,c) do {wrmsr(msrs->counters[(c)].addr, -(u32)(l), -1);} while (0)
#define CTR_32BIT_WRITE(l,msrs,c) \
do {wrmsr(msrs->counters[(c)].addr, -(u32)(l), 0);} while (0)
#define CTR_OVERFLOWED(n) (!((n) & (1U<<31)))
#define CTRL_IS_RESERVED(msrs,c) (msrs->controls[(c)].addr ? 1 : 0)
......@@ -79,7 +80,7 @@ static void ppro_setup_ctrs(struct op_msrs const * const msrs)
for (i = 0; i < NUM_COUNTERS; ++i) {
if (unlikely(!CTR_IS_RESERVED(msrs,i)))
continue;
CTR_WRITE(1, msrs, i);
CTR_32BIT_WRITE(1, msrs, i);
}
/* enable active counters */
......@@ -87,7 +88,7 @@ static void ppro_setup_ctrs(struct op_msrs const * const msrs)
if ((counter_config[i].enabled) && (CTR_IS_RESERVED(msrs,i))) {
reset_value[i] = counter_config[i].count;
CTR_WRITE(counter_config[i].count, msrs, i);
CTR_32BIT_WRITE(counter_config[i].count, msrs, i);
CTRL_READ(low, high, msrs, i);
CTRL_CLEAR(low);
......@@ -116,7 +117,7 @@ static int ppro_check_ctrs(struct pt_regs * const regs,
CTR_READ(low, high, msrs, i);
if (CTR_OVERFLOWED(low)) {
oprofile_add_sample(regs, i);
CTR_WRITE(reset_value[i], msrs, i);
CTR_32BIT_WRITE(reset_value[i], msrs, i);
}
}
......
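CTR_OVERFLOWED() reads as a double negative but follows directly from how the counters are programmed above: written with a negated value, the counter counts up with bit 31 set, and the moment it overflows bit 31 reads as zero. A standalone sketch (the event count is assumed):
#include <stdio.h>
#include <stdint.h>
#define CTR_OVERFLOWED(n) (!((n) & (1U << 31)))
int main(void)
{
	uint32_t before = (uint32_t)-100;	/* freshly programmed */
	uint32_t after = before + 100;		/* 100 events later: 0 */
	/* prints "before: 0, after: 1" */
	printf("before: %d, after: %d\n",
	       CTR_OVERFLOWED(before), CTR_OVERFLOWED(after));
	return 0;
}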
obj-y := i386.o init.o
obj-$(CONFIG_PCI_BIOS) += pcbios.o
obj-$(CONFIG_PCI_MMCONFIG) += mmconfig.o direct.o
obj-$(CONFIG_PCI_MMCONFIG) += mmconfig.o direct.o mmconfig-shared.o
obj-$(CONFIG_PCI_DIRECT) += direct.o
pci-y := fixup.o
......
/*
* mmconfig-shared.c - Low-level direct PCI config space access via
* MMCONFIG - common code between i386 and x86-64.
*
* This code does:
* - known chipset handling
* - ACPI decoding and validation
*
* Per-architecture code takes care of the mappings and accesses
* themselves.
*/
#include <linux/pci.h>
#include <linux/init.h>
#include <linux/acpi.h>
#include <linux/bitmap.h>
#include <asm/e820.h>
#include "pci.h"
/* aperture is up to 256MB but BIOS may reserve less */
#define MMCONFIG_APER_MIN (2 * 1024*1024)
#define MMCONFIG_APER_MAX (256 * 1024*1024)
DECLARE_BITMAP(pci_mmcfg_fallback_slots, 32*PCI_MMCFG_MAX_CHECK_BUS);
/* K8 systems have some devices (typically in the builtin northbridge)
that are only accessible using type 1 config accesses.
Normally this can be expressed in the MCFG by not listing them
and assigning suitable _SEGs, but this isn't implemented in some BIOSes.
Instead try to discover all devices on bus 0 that are unreachable
using MMCONFIG and fall back to type 1 for them. */
static void __init unreachable_devices(void)
{
int i, bus;
/* Use the max bus number from ACPI here? */
for (bus = 0; bus < PCI_MMCFG_MAX_CHECK_BUS; bus++) {
for (i = 0; i < 32; i++) {
unsigned int devfn = PCI_DEVFN(i, 0);
u32 val1, val2;
pci_conf1_read(0, bus, devfn, 0, 4, &val1);
if (val1 == 0xffffffff)
continue;
if (pci_mmcfg_arch_reachable(0, bus, devfn)) {
raw_pci_ops->read(0, bus, devfn, 0, 4, &val2);
if (val1 == val2)
continue;
}
set_bit(i + 32 * bus, pci_mmcfg_fallback_slots);
printk(KERN_NOTICE "PCI: No mmconfig possible on device"
" %02x:%02x\n", bus, i);
}
}
}
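The fallback bitmap is indexed as slot + 32 * bus, one bit per (bus, slot) pair on the first PCI_MMCFG_MAX_CHECK_BUS busses. A hedged sketch of the lookup the per-architecture read path would perform (the helper name is illustrative, not part of the patch):
static int use_type1_fallback(unsigned int bus, unsigned int devfn)
{
	/* Beyond the checked range, trust MMCONFIG unconditionally */
	if (bus >= PCI_MMCFG_MAX_CHECK_BUS)
		return 0;
	return test_bit(PCI_SLOT(devfn) + 32 * bus,
			pci_mmcfg_fallback_slots);
}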
static const char __init *pci_mmcfg_e7520(void)
{
u32 win;
pci_conf1_read(0, 0, PCI_DEVFN(0,0), 0xce, 2, &win);
pci_mmcfg_config_num = 1;
pci_mmcfg_config = kzalloc(sizeof(pci_mmcfg_config[0]), GFP_KERNEL);
if (!pci_mmcfg_config)
return NULL;
pci_mmcfg_config[0].address = (win & 0xf000) << 16;
pci_mmcfg_config[0].pci_segment = 0;
pci_mmcfg_config[0].start_bus_number = 0;
pci_mmcfg_config[0].end_bus_number = 255;
return "Intel Corporation E7520 Memory Controller Hub";
}
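The E7520 keeps the MMCONFIG base in a 16-bit window register at config offset 0xce: bits 15:12 of the window hold bits 31:28 of the physical address, which is what (win & 0xf000) << 16 recovers. For example, a window value of 0xe000 yields a base address of 0xe0000000.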
static const char __init *pci_mmcfg_intel_945(void)
{
u32 pciexbar, mask = 0, len = 0;
pci_mmcfg_config_num = 1;
pci_conf1_read(0, 0, PCI_DEVFN(0,0), 0x48, 4, &pciexbar);
/* Enable bit */
if (!(pciexbar & 1))
pci_mmcfg_config_num = 0;
/* Size bits */
switch ((pciexbar >> 1) & 3) {
case 0:
mask = 0xf0000000U;
len = 0x10000000U;
break;
case 1:
mask = 0xf8000000U;
len = 0x08000000U;
break;
case 2:
mask = 0xfc000000U;
len = 0x04000000U;
break;
default:
pci_mmcfg_config_num = 0;
}
/* Errata #2: things break when not aligned on a 256 MB boundary */
/* Can only happen in 64 MB/128 MB mode */
if ((pciexbar & mask) & 0x0fffffffU)
pci_mmcfg_config_num = 0;
if (pci_mmcfg_config_num) {
pci_mmcfg_config = kzalloc(sizeof(pci_mmcfg_config[0]), GFP_KERNEL);
if (!pci_mmcfg_config)
return NULL;
pci_mmcfg_config[0].address = pciexbar & mask;
pci_mmcfg_config[0].pci_segment = 0;
pci_mmcfg_config[0].start_bus_number = 0;
pci_mmcfg_config[0].end_bus_number = (len >> 20) - 1;
}
return "Intel Corporation 945G/GZ/P/PL Express Memory Controller Hub";
}
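A worked example of the decoding above, with an assumed register value: pciexbar == 0xe0000001 has the enable bit set and size field 0, so mask = 0xf0000000 and len = 0x10000000 (256 MB); the base comes out as 0xe0000000, which satisfies the 256 MB alignment errata, and end_bus_number = (0x10000000 >> 20) - 1 = 255. A 64 MB or 128 MB window whose base is not 256 MB-aligned would zero pci_mmcfg_config_num and disable MMCONFIG instead.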
struct pci_mmcfg_hostbridge_probe {
u32 vendor;
u32 device;
const char *(*probe)(void);
};
static struct pci_mmcfg_hostbridge_probe pci_mmcfg_probes[] __initdata = {
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, pci_mmcfg_e7520 },
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82945G_HB, pci_mmcfg_intel_945 },
};
static int __init pci_mmcfg_check_hostbridge(void)
{
u32 l;
u16 vendor, device;
int i;
const char *name;
pci_conf1_read(0, 0, PCI_DEVFN(0,0), 0, 4, &l);
vendor = l & 0xffff;
device = (l >> 16) & 0xffff;
pci_mmcfg_config_num = 0;
pci_mmcfg_config = NULL;
name = NULL;
for (i = 0; !name && i < ARRAY_SIZE(pci_mmcfg_probes); i++) {
if (pci_mmcfg_probes[i].vendor == vendor &&
pci_mmcfg_probes[i].device == device)
name = pci_mmcfg_probes[i].probe();
}
if (name) {
printk(KERN_INFO "PCI: Found %s %s MMCONFIG support.\n",
name, pci_mmcfg_config_num ? "with" : "without");
}
return name != NULL;
}
static void __init pci_mmcfg_insert_resources(void)
{
#define PCI_MMCFG_RESOURCE_NAME_LEN 19
int i;
struct resource *res;
char *names;
unsigned num_buses;
res = kcalloc(PCI_MMCFG_RESOURCE_NAME_LEN + sizeof(*res),
pci_mmcfg_config_num, GFP_KERNEL);
if (!res) {
printk(KERN_ERR "PCI: Unable to allocate MMCONFIG resources\n");
return;
}
names = (void *)&res[pci_mmcfg_config_num];
for (i = 0; i < pci_mmcfg_config_num; i++, res++) {
struct acpi_mcfg_allocation *cfg = &pci_mmcfg_config[i];
num_buses = cfg->end_bus_number - cfg->start_bus_number + 1;
res->name = names;
snprintf(names, PCI_MMCFG_RESOURCE_NAME_LEN, "PCI MMCONFIG %u",
cfg->pci_segment);
res->start = cfg->address;
res->end = res->start + (num_buses << 20) - 1;
res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
insert_resource(&iomem_resource, res);
names += PCI_MMCFG_RESOURCE_NAME_LEN;
}
}
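The num_buses << 20 above encodes the fixed MMCONFIG geometry: each bus decodes 32 slots x 8 functions x 4 KB of config space, exactly 1 MB, so a segment spanning busses 0-255 reserves a 256 MB window.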
static void __init pci_mmcfg_reject_broken(int type)
{
typeof(pci_mmcfg_config[0]) *cfg;
if ((pci_mmcfg_config_num == 0) ||
(pci_mmcfg_config == NULL) ||
(pci_mmcfg_config[0].address == 0))
return;
cfg = &pci_mmcfg_config[0];
/*
* Handle more broken MCFG tables on Asus etc.
* They only contain a single entry for bus 0-0.
*/
if (pci_mmcfg_config_num == 1 &&
cfg->pci_segment == 0 &&
(cfg->start_bus_number | cfg->end_bus_number) == 0) {
printk(KERN_ERR "PCI: start and end of bus number is 0. "
"Rejected as broken MCFG.\n");
goto reject;
}
/*
* Only do this check when type 1 works. If it doesn't work
* assume we run on a Mac and always use MCFG
*/
if (type == 1 && !e820_all_mapped(cfg->address,
cfg->address + MMCONFIG_APER_MIN,
E820_RESERVED)) {
printk(KERN_ERR "PCI: BIOS Bug: MCFG area at %Lx is not"
" E820-reserved\n", cfg->address);
goto reject;
}
return;
reject:
printk(KERN_ERR "PCI: Not using MMCONFIG.\n");
kfree(pci_mmcfg_config);
pci_mmcfg_config = NULL;
pci_mmcfg_config_num = 0;
}
void __init pci_mmcfg_init(int type)
{
int known_bridge = 0;
if ((pci_probe & PCI_PROBE_MMCONF) == 0)
return;
if (type == 1 && pci_mmcfg_check_hostbridge())
known_bridge = 1;
if (!known_bridge) {
acpi_table_parse(ACPI_SIG_MCFG, acpi_parse_mcfg);
pci_mmcfg_reject_broken(type);
}
if ((pci_mmcfg_config_num == 0) ||
(pci_mmcfg_config == NULL) ||
(pci_mmcfg_config[0].address == 0))
return;
if (pci_mmcfg_arch_init()) {
if (type == 1)
unreachable_devices();
if (known_bridge)
pci_mmcfg_insert_resources();
pci_probe = (pci_probe & ~PCI_PROBE_MASK) | PCI_PROBE_MMCONF;
}
}
This diff is collapsed.
......@@ -94,3 +94,13 @@ extern void pci_pcbios_init(void);
extern void pci_mmcfg_init(int type);
extern void pcibios_sort(void);
/* pci-mmconfig.c */
/* Verify the first 16 busses. We assume that systems with more busses
get MCFG right. */
#define PCI_MMCFG_MAX_CHECK_BUS 16
extern DECLARE_BITMAP(pci_mmcfg_fallback_slots, 32*PCI_MMCFG_MAX_CHECK_BUS);
extern int __init pci_mmcfg_arch_reachable(unsigned int seg, unsigned int bus,
unsigned int devfn);
extern int __init pci_mmcfg_arch_init(void);
......@@ -152,18 +152,18 @@ config MPSC
Optimize for Intel Pentium 4 and older Nocona/Dempsey Xeon CPUs
with Intel Extended Memory 64 Technology(EM64T). For details see
<http://www.intel.com/technology/64bitextensions/>.
Note the the latest Xeons (Xeon 51xx and 53xx) are not based on the
Netburst core and shouldn't use this option. You can distingush them
Note that the latest Xeons (Xeon 51xx and 53xx) are not based on the
Netburst core and shouldn't use this option. You can distinguish them
using the cpu family field
in /proc/cpuinfo. Family 15 is a older Xeon, Family 6 a newer one
(this rule only applies to system that support EM64T)
in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one
(this rule only applies to systems that support EM64T)
config MCORE2
bool "Intel Core2 / newer Xeon"
help
Optimize for Intel Core2 and newer Xeons (51xx)
You can distingush the newer Xeons from the older ones using
the cpu family field in /proc/cpuinfo. 15 is a older Xeon
You can distinguish the newer Xeons from the older ones using
the cpu family field in /proc/cpuinfo. 15 is an older Xeon
(use CONFIG_MPSC then), 6 is a newer one. This rule only
applies to CPUs that support EM64T.
......@@ -458,8 +458,8 @@ config IOMMU
on systems with more than 3GB. This is usually needed for USB,
sound, many IDE/SATA chipsets and some other devices.
Provides a driver for the AMD Athlon64/Opteron/Turion/Sempron GART
based IOMMU and a software bounce buffer based IOMMU used on Intel
systems and as fallback.
based hardware IOMMU and a software bounce buffer based IOMMU used
on Intel systems and as fallback.
The code is only active when needed (enough memory and limited
device) unless CONFIG_IOMMU_DEBUG or iommu=force is specified
too.
......@@ -496,6 +496,12 @@ config CALGARY_IOMMU_ENABLED_BY_DEFAULT
# need this always selected by IOMMU for the VIA workaround
config SWIOTLB
bool
help
Support for software bounce buffers used on x86-64 systems
which don't have a hardware IOMMU (e.g. the current generation
of Intel's x86-64 CPUs). With this, PCI devices that can only
address 32 bits of memory can be used on systems with more than
3 GB of memory. If unsure, say Y.
config X86_MCE
bool "Machine check support" if EMBEDDED
......
This diff is collapsed.
......@@ -21,6 +21,7 @@
#include <linux/stddef.h>
#include <linux/personality.h>
#include <linux/compat.h>
#include <linux/binfmts.h>
#include <asm/ucontext.h>
#include <asm/uaccess.h>
#include <asm/i387.h>
......@@ -449,7 +450,11 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
/* Return stub is in 32bit vsyscall page */
{
void __user *restorer = VSYSCALL32_SIGRETURN;
void __user *restorer;
if (current->binfmt->hasvdso)
restorer = VSYSCALL32_SIGRETURN;
else
restorer = (void *)&frame->retcode;
if (ka->sa.sa_flags & SA_RESTORER)
restorer = ka->sa.sa_restorer;
err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
......@@ -495,7 +500,7 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
ptrace_notify(SIGTRAP);
#if DEBUG_SIG
printk("SIG deliver (%s:%d): sp=%p pc=%p ra=%p\n",
printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
current->comm, current->pid, frame, regs->rip, frame->pretcode);
#endif
......@@ -601,7 +606,7 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
ptrace_notify(SIGTRAP);
#if DEBUG_SIG
printk("SIG deliver (%s:%d): sp=%p pc=%p ra=%p\n",
printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
current->comm, current->pid, frame, regs->rip, frame->pretcode);
#endif
......
......@@ -718,4 +718,5 @@ ia32_sys_call_table:
.quad compat_sys_vmsplice
.quad compat_sys_move_pages
.quad sys_getcpu
.quad sys_epoll_pwait
ia32_syscall_end:
......@@ -43,6 +43,7 @@ obj-$(CONFIG_PCI) += early-quirks.o
obj-y += topology.o
obj-y += intel_cacheinfo.o
obj-y += pcspeaker.o
CFLAGS_vsyscall.o := $(PROFILING) -g0
......@@ -56,3 +57,4 @@ quirks-y += ../../i386/kernel/quirks.o
i8237-y += ../../i386/kernel/i8237.o
msr-$(subst m,y,$(CONFIG_X86_MSR)) += ../../i386/kernel/msr.o
alternative-y += ../../i386/kernel/alternative.o
pcspeaker-y += ../../i386/kernel/pcspeaker.o
......@@ -58,7 +58,7 @@ unsigned long acpi_wakeup_address = 0;
unsigned long acpi_video_flags;
extern char wakeup_start, wakeup_end;
extern unsigned long FASTCALL(acpi_copy_wakeup_routine(unsigned long));
extern unsigned long acpi_copy_wakeup_routine(unsigned long);
static pgd_t low_ptr;
......
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
......@@ -114,6 +114,6 @@ asmlinkage long sys_iopl(unsigned int level, struct pt_regs *regs)
if (!capable(CAP_SYS_RAWIO))
return -EPERM;
}
regs->eflags = (regs->eflags &~ 0x3000UL) | (level << 12);
regs->eflags = (regs->eflags &~ X86_EFLAGS_IOPL) | (level << 12);
return 0;
}
This diff is collapsed.