Commit 5e83f6fb authored by Linus Torvalds

Merge branch 'kvm-updates/2.6.36' of git://git.kernel.org/pub/scm/virt/kvm/kvm

* 'kvm-updates/2.6.36' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (198 commits)
  KVM: VMX: Fix host GDT.LIMIT corruption
  KVM: MMU: using __xchg_spte more smarter
  KVM: MMU: cleanup spte set and accssed/dirty tracking
  KVM: MMU: don't atomicly set spte if it's not present
  KVM: MMU: fix page dirty tracking lost while sync page
  KVM: MMU: fix broken page accessed tracking with ept enabled
  KVM: MMU: add missing reserved bits check in speculative path
  KVM: MMU: fix mmu notifier invalidate handler for huge spte
  KVM: x86 emulator: fix xchg instruction emulation
  KVM: x86: Call mask notifiers from pic
  KVM: x86: never re-execute instruction with enabled tdp
  KVM: Document KVM_GET_SUPPORTED_CPUID2 ioctl
  KVM: x86: emulator: inc/dec can have lock prefix
  KVM: MMU: Eliminate redundant temporaries in FNAME(fetch)
  KVM: MMU: Validate all gptes during fetch, not just those used for new pages
  KVM: MMU: Simplify spte fetch() function
  KVM: MMU: Add gpte_valid() helper
  KVM: MMU: Add validate_direct_spte() helper
  KVM: MMU: Add drop_large_spte() helper
  KVM: MMU: Use __set_spte to link shadow pages
  ...
parents fe445c6e 3444d7da
...@@ -487,17 +487,6 @@ Who: Jan Kiszka <jan.kiszka@web.de> ...@@ -487,17 +487,6 @@ Who: Jan Kiszka <jan.kiszka@web.de>
---------------------------- ----------------------------
What: KVM memory aliases support
When: July 2010
Why: Memory aliasing support is used for speeding up guest vga access
through the vga windows.
Modern userspace no longer uses this feature, so it's just bitrotted
code and can be removed with no impact.
Who: Avi Kivity <avi@redhat.com>
----------------------------
What: xtime, wall_to_monotonic What: xtime, wall_to_monotonic
When: 2.6.36+ When: 2.6.36+
Files: kernel/time/timekeeping.c include/linux/time.h Files: kernel/time/timekeeping.c include/linux/time.h
...@@ -508,16 +497,6 @@ Who: John Stultz <johnstul@us.ibm.com> ...@@ -508,16 +497,6 @@ Who: John Stultz <johnstul@us.ibm.com>
---------------------------- ----------------------------
What: KVM kernel-allocated memory slots
When: July 2010
Why: Since 2.6.25, kvm supports user-allocated memory slots, which are
much more flexible than kernel-allocated slots. All current userspace
supports the newer interface and this code can be removed with no
impact.
Who: Avi Kivity <avi@redhat.com>
----------------------------
What: KVM paravirt mmu host support What: KVM paravirt mmu host support
When: January 2011 When: January 2011
Why: The paravirt mmu host support is slower than non-paravirt mmu, both Why: The paravirt mmu host support is slower than non-paravirt mmu, both
......
...@@ -126,6 +126,10 @@ user fills in the size of the indices array in nmsrs, and in return ...@@ -126,6 +126,10 @@ user fills in the size of the indices array in nmsrs, and in return
kvm adjusts nmsrs to reflect the actual number of msrs and fills in kvm adjusts nmsrs to reflect the actual number of msrs and fills in
the indices array with their numbers. the indices array with their numbers.
Note: if kvm indicates support for MCE (KVM_CAP_MCE), then the MCE bank MSRs are
not returned in the MSR list, as different vcpus can have a different number
of banks, as set via the KVM_X86_SETUP_MCE ioctl.
4.4 KVM_CHECK_EXTENSION 4.4 KVM_CHECK_EXTENSION
Capability: basic Capability: basic
...@@ -160,29 +164,7 @@ Type: vm ioctl ...@@ -160,29 +164,7 @@ Type: vm ioctl
Parameters: struct kvm_memory_region (in) Parameters: struct kvm_memory_region (in)
Returns: 0 on success, -1 on error Returns: 0 on success, -1 on error
struct kvm_memory_region { This ioctl is obsolete and has been removed.
__u32 slot;
__u32 flags;
__u64 guest_phys_addr;
__u64 memory_size; /* bytes */
};
/* for kvm_memory_region::flags */
#define KVM_MEM_LOG_DIRTY_PAGES 1UL
This ioctl allows the user to create or modify a guest physical memory
slot. When changing an existing slot, it may be moved in the guest
physical memory space, or its flags may be modified. It may not be
resized. Slots may not overlap.
The flags field supports just one flag, KVM_MEM_LOG_DIRTY_PAGES, which
instructs kvm to keep track of writes to memory within the slot. See
the KVM_GET_DIRTY_LOG ioctl.
It is recommended to use the KVM_SET_USER_MEMORY_REGION ioctl instead
of this API, if available. This newer API allows placing guest memory
at specified locations in the host address space, yielding better
control and easy access.
4.6 KVM_CREATE_VCPU 4.6 KVM_CREATE_VCPU
...@@ -226,17 +208,7 @@ Type: vm ioctl ...@@ -226,17 +208,7 @@ Type: vm ioctl
Parameters: struct kvm_memory_alias (in) Parameters: struct kvm_memory_alias (in)
Returns: 0 (success), -1 (error) Returns: 0 (success), -1 (error)
struct kvm_memory_alias { This ioctl is obsolete and has been removed.
__u32 slot; /* this has a different namespace than memory slots */
__u32 flags;
__u64 guest_phys_addr;
__u64 memory_size;
__u64 target_phys_addr;
};
Defines a guest physical address space region as an alias to another
region. Useful for aliased address, for example the VGA low memory
window. Should not be used with userspace memory.
4.9 KVM_RUN 4.9 KVM_RUN
...@@ -892,6 +864,174 @@ arguments. ...@@ -892,6 +864,174 @@ arguments.
This ioctl is only useful after KVM_CREATE_IRQCHIP. Without an in-kernel This ioctl is only useful after KVM_CREATE_IRQCHIP. Without an in-kernel
irqchip, the multiprocessing state must be maintained by userspace. irqchip, the multiprocessing state must be maintained by userspace.
4.39 KVM_SET_IDENTITY_MAP_ADDR
Capability: KVM_CAP_SET_IDENTITY_MAP_ADDR
Architectures: x86
Type: vm ioctl
Parameters: unsigned long identity (in)
Returns: 0 on success, -1 on error
This ioctl defines the physical address of a one-page region in the guest
physical address space. The region must be within the first 4GB of the
guest physical address space and must not conflict with any memory slot
or any mmio address. The guest may malfunction if it accesses this memory
region.
This ioctl is required on Intel-based hosts. It is needed because of a quirk
in Intel's virtualization implementation (see the internals documentation,
once it is written).
4.40 KVM_SET_BOOT_CPU_ID
Capability: KVM_CAP_SET_BOOT_CPU_ID
Architectures: x86, ia64
Type: vm ioctl
Parameters: unsigned long vcpu_id
Returns: 0 on success, -1 on error
Define which vcpu is the Bootstrap Processor (BSP). Values are the same
as the vcpu id in KVM_CREATE_VCPU. If this ioctl is not called, the default
is vcpu 0.
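
A minimal userspace sketch of the two vm ioctls above (4.39 and 4.40); vm_fd,
the chosen identity-map address and the BSP id are illustrative values, not
requirements:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int setup_vm_quirks(int vm_fd)
	{
		/* one page below 4GB, outside any memory slot or mmio range */
		uint64_t ident_addr = 0xfffbc000;

		if (ioctl(vm_fd, KVM_SET_IDENTITY_MAP_ADDR, &ident_addr) < 0)
			return -1;

		/* make vcpu 1 the bootstrap processor instead of the default 0 */
		if (ioctl(vm_fd, KVM_SET_BOOT_CPU_ID, 1) < 0)
			return -1;

		return 0;
	}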
4.41 KVM_GET_XSAVE
Capability: KVM_CAP_XSAVE
Architectures: x86
Type: vcpu ioctl
Parameters: struct kvm_xsave (out)
Returns: 0 on success, -1 on error
struct kvm_xsave {
__u32 region[1024];
};
This ioctl copies the current vcpu's xsave struct to userspace.
4.42 KVM_SET_XSAVE
Capability: KVM_CAP_XSAVE
Architectures: x86
Type: vcpu ioctl
Parameters: struct kvm_xsave (in)
Returns: 0 on success, -1 on error
struct kvm_xsave {
__u32 region[1024];
};
This ioctl copies userspace's xsave struct to the kernel.
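
A minimal sketch of a get/modify/set round trip with these two ioctls;
vcpu_fd is an assumed, already-created vcpu file descriptor:

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int xsave_roundtrip(int vcpu_fd)
	{
		struct kvm_xsave xsave;

		memset(&xsave, 0, sizeof(xsave));
		if (ioctl(vcpu_fd, KVM_GET_XSAVE, &xsave) < 0)
			return -1;

		/* ... userspace may inspect, save or migrate the region here ... */

		return ioctl(vcpu_fd, KVM_SET_XSAVE, &xsave);
	}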
4.43 KVM_GET_XCRS
Capability: KVM_CAP_XCRS
Architectures: x86
Type: vcpu ioctl
Parameters: struct kvm_xcrs (out)
Returns: 0 on success, -1 on error
struct kvm_xcr {
__u32 xcr;
__u32 reserved;
__u64 value;
};
struct kvm_xcrs {
__u32 nr_xcrs;
__u32 flags;
struct kvm_xcr xcrs[KVM_MAX_XCRS];
__u64 padding[16];
};
This ioctl copies the current vcpu's xcrs to userspace.
4.44 KVM_SET_XCRS
Capability: KVM_CAP_XCRS
Architectures: x86
Type: vcpu ioctl
Parameters: struct kvm_xcrs (in)
Returns: 0 on success, -1 on error
struct kvm_xcr {
__u32 xcr;
__u32 reserved;
__u64 value;
};
struct kvm_xcrs {
__u32 nr_xcrs;
__u32 flags;
struct kvm_xcr xcrs[KVM_MAX_XCRS];
__u64 padding[16];
};
This ioctl sets the vcpu's xcrs to the values userspace specifies.
4.45 KVM_GET_SUPPORTED_CPUID
Capability: KVM_CAP_EXT_CPUID
Architectures: x86
Type: system ioctl
Parameters: struct kvm_cpuid2 (in/out)
Returns: 0 on success, -1 on error
struct kvm_cpuid2 {
__u32 nent;
__u32 padding;
struct kvm_cpuid_entry2 entries[0];
};
#define KVM_CPUID_FLAG_SIGNIFCANT_INDEX 1
#define KVM_CPUID_FLAG_STATEFUL_FUNC 2
#define KVM_CPUID_FLAG_STATE_READ_NEXT 4
struct kvm_cpuid_entry2 {
__u32 function;
__u32 index;
__u32 flags;
__u32 eax;
__u32 ebx;
__u32 ecx;
__u32 edx;
__u32 padding[3];
};
This ioctl returns x86 cpuid features which are supported by both the hardware
and kvm. Userspace can use the information returned by this ioctl to
construct cpuid information (for KVM_SET_CPUID2) that is consistent with
hardware, kernel, and userspace capabilities, and with user requirements (for
example, the user may wish to constrain cpuid to emulate older hardware,
or for feature consistency across a cluster).
Userspace invokes KVM_GET_SUPPORTED_CPUID by passing a kvm_cpuid2 structure
with the 'nent' field indicating the number of entries in the variable-size
array 'entries'. If the number of entries is too low to describe the cpu
capabilities, an error (E2BIG) is returned. If the number is too high,
the 'nent' field is adjusted and an error (ENOMEM) is returned. If the
number is just right, the 'nent' field is adjusted to the number of valid
entries in the 'entries' array, which is then filled.
The entries returned are the host cpuid as returned by the cpuid instruction,
with unknown or unsupported features masked out. The fields in each entry
are defined as follows:
function: the eax value used to obtain the entry
index: the ecx value used to obtain the entry (for entries that are
affected by ecx)
flags: an OR of zero or more of the following:
KVM_CPUID_FLAG_SIGNIFCANT_INDEX:
if the index field is valid
KVM_CPUID_FLAG_STATEFUL_FUNC:
if cpuid for this function returns different values for successive
invocations; there will be several entries with the same function,
all with this flag set
KVM_CPUID_FLAG_STATE_READ_NEXT:
for KVM_CPUID_FLAG_STATEFUL_FUNC entries, set if this entry is
the first entry to be read by a cpu
eax, ebx, ecx, edx: the values returned by the cpuid instruction for
this function/index combination
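
A minimal userspace sketch of the 'nent' negotiation described above,
growing the buffer until the E2BIG case no longer occurs; kvm_fd is an
assumed open /dev/kvm file descriptor:

	#include <errno.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static struct kvm_cpuid2 *get_supported_cpuid(int kvm_fd)
	{
		int nent = 64;
		struct kvm_cpuid2 *cpuid;

		for (;;) {
			cpuid = calloc(1, sizeof(*cpuid) +
				       nent * sizeof(struct kvm_cpuid_entry2));
			if (!cpuid)
				return NULL;
			cpuid->nent = nent;
			if (ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, cpuid) == 0)
				return cpuid;	/* nent now holds the valid count */
			free(cpuid);
			if (errno != E2BIG)
				return NULL;
			nent *= 2;	/* too small, retry with a larger array */
		}
	}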
5. The kvm_run structure 5. The kvm_run structure
Application code obtains a pointer to the kvm_run structure by Application code obtains a pointer to the kvm_run structure by
......
...@@ -77,10 +77,10 @@ Memory ...@@ -77,10 +77,10 @@ Memory
Guest memory (gpa) is part of the user address space of the process that is Guest memory (gpa) is part of the user address space of the process that is
using kvm. Userspace defines the translation between guest addresses and user using kvm. Userspace defines the translation between guest addresses and user
addresses (gpa->hva); note that two gpas may alias to the same gva, but not addresses (gpa->hva); note that two gpas may alias to the same hva, but not
vice versa. vice versa.
These gvas may be backed using any method available to the host: anonymous These hvas may be backed using any method available to the host: anonymous
memory, file backed memory, and device memory. Memory might be paged by the memory, file backed memory, and device memory. Memory might be paged by the
host at any time. host at any time.
...@@ -161,7 +161,7 @@ Shadow pages contain the following information: ...@@ -161,7 +161,7 @@ Shadow pages contain the following information:
role.cr4_pae: role.cr4_pae:
Contains the value of cr4.pae for which the page is valid (e.g. whether Contains the value of cr4.pae for which the page is valid (e.g. whether
32-bit or 64-bit gptes are in use). 32-bit or 64-bit gptes are in use).
role.cr4_nxe: role.nxe:
Contains the value of efer.nxe for which the page is valid. Contains the value of efer.nxe for which the page is valid.
role.cr0_wp: role.cr0_wp:
Contains the value of cr0.wp for which the page is valid. Contains the value of cr0.wp for which the page is valid.
...@@ -180,7 +180,9 @@ Shadow pages contain the following information: ...@@ -180,7 +180,9 @@ Shadow pages contain the following information:
guest pages as leaves. guest pages as leaves.
gfns: gfns:
An array of 512 guest frame numbers, one for each present pte. Used to An array of 512 guest frame numbers, one for each present pte. Used to
perform a reverse map from a pte to a gfn. perform a reverse map from a pte to a gfn. When role.direct is set, any
element of this array can be calculated from the gfn field when needed; in
that case, the array of gfns is not allocated. See role.direct and gfn.
slot_bitmap: slot_bitmap:
A bitmap containing one bit per memory slot. If the page contains a pte A bitmap containing one bit per memory slot. If the page contains a pte
mapping a page from memory slot n, then bit n of slot_bitmap will be set mapping a page from memory slot n, then bit n of slot_bitmap will be set
...@@ -296,6 +298,48 @@ Host translation updates: ...@@ -296,6 +298,48 @@ Host translation updates:
- look up affected sptes through reverse map - look up affected sptes through reverse map
- drop (or update) translations - drop (or update) translations
Emulating cr0.wp
================
If tdp is not enabled, the host must keep cr0.wp=1 so page write protection
works for the guest kernel, not guest userspace. When the guest
cr0.wp=1, this does not present a problem. However when the guest cr0.wp=0,
we cannot map the permissions for gpte.u=1, gpte.w=0 to any spte (the
semantics require allowing any guest kernel access plus user read access).
We handle this by mapping the permissions to two possible sptes, depending
on fault type:
- kernel write fault: spte.u=0, spte.w=1 (allows full kernel access,
disallows user access)
- read fault: spte.u=1, spte.w=0 (allows full read access, disallows kernel
write access)
(user write faults generate a #PF)
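
A schematic sketch of the mapping above; the structure and helper names are
illustrative only and do not correspond to the kernel's actual mmu code:

	struct spte_perm {
		unsigned user:1;	/* spte.u */
		unsigned write:1;	/* spte.w */
	};

	/* guest runs with cr0.wp=0 and the gpte has u=1, w=0 */
	static struct spte_perm wp0_spte_for_fault(int write_fault, int user_fault)
	{
		struct spte_perm spte = { 0, 0 };

		if (write_fault && !user_fault) {
			/* kernel write: full kernel access, no user access */
			spte.user = 0;
			spte.write = 1;
		} else if (!write_fault) {
			/* read: full read access, no kernel write access */
			spte.user = 1;
			spte.write = 0;
		}
		/* user write faults are reflected back to the guest as #PF */
		return spte;
	}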
Large pages
===========
The mmu supports all combinations of large and small guest and host pages.
Supported page sizes include 4k, 2M, 4M, and 1G. 4M pages are treated as
two separate 2M pages, on both guest and host, since the mmu always uses PAE
paging.
To instantiate a large spte, four constraints must be satisfied:
- the spte must point to a large host page
- the guest pte must be a large pte of at least equivalent size (if tdp is
enabled, there is no guest pte and this condition is satisfied)
- if the spte will be writeable, the large page frame may not overlap any
write-protected pages
- the guest page must be wholly contained by a single memory slot
To check the last two conditions, the mmu maintains a ->write_count set of
arrays for each memory slot and large page size. Every write protected page
causes its write_count to be incremented, thus preventing instantiation of
a large spte. The frames at the end of an unaligned memory slot have
artificially inflated ->write_counts so they can never be instantiated.
Further reading Further reading
=============== ===============
......
KVM-specific MSRs.
Glauber Costa <glommer@redhat.com>, Red Hat Inc, 2010
=====================================================
KVM makes use of some custom MSRs to service some requests.
At present, this facility is only used by kvmclock.
Custom MSRs have a reserved range that goes from
0x4b564d00 to 0x4b564dff. There are MSRs outside this area,
but they are deprecated and their use is discouraged.
Custom MSR list
---------------
The currently supported custom MSRs are:
MSR_KVM_WALL_CLOCK_NEW: 0x4b564d00
data: 4-byte aligned physical address of a memory area which must be
in guest RAM. This memory is expected to hold a copy of the following
structure:
struct pvclock_wall_clock {
u32 version;
u32 sec;
u32 nsec;
} __attribute__((__packed__));
whose data will be filled in by the hypervisor. The hypervisor is only
guaranteed to update this data at the moment of MSR write.
Users that want to reliably query this information more than once have
to write more than once to this MSR. Fields have the following meanings:
version: guest has to check version before and after grabbing
time information and check that they are both equal and even.
An odd version indicates an in-progress update.
sec: number of seconds for wallclock.
nsec: number of nanoseconds for wallclock.
Note that although MSRs are per-CPU entities, the effect of this
particular MSR is global.
Availability of this MSR must be checked via bit 3 in 0x40000001 cpuid
leaf prior to usage.
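
A guest-side sketch of querying the wall clock as described above.
wrmsr() and guest_phys_addr() stand in for the guest kernel's MSR-write and
virtual-to-physical helpers and are assumptions, not part of the ABI; a real
implementation also needs read barriers around the version checks:

	#include <stdint.h>

	#define MSR_KVM_WALL_CLOCK_NEW	0x4b564d00

	struct pvclock_wall_clock {
		uint32_t version;
		uint32_t sec;
		uint32_t nsec;
	} __attribute__((__packed__));

	/* assumed guest-kernel primitives */
	extern void wrmsr(uint32_t msr, uint64_t value);
	extern uint64_t guest_phys_addr(void *va);

	static struct pvclock_wall_clock wall_clock;

	static void read_wall_clock(uint32_t *sec, uint32_t *nsec)
	{
		uint32_t version;

		/* the hypervisor only updates the area on MSR write,
		 * so every query starts with a fresh write */
		wrmsr(MSR_KVM_WALL_CLOCK_NEW, guest_phys_addr(&wall_clock));
		do {
			version = wall_clock.version;
			*sec = wall_clock.sec;
			*nsec = wall_clock.nsec;
		} while ((version & 1) || (version != wall_clock.version));
	}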
MSR_KVM_SYSTEM_TIME_NEW: 0x4b564d01
data: 4-byte aligned physical address of a memory area which must be in
guest RAM, plus an enable bit in bit 0. This memory is expected to hold
a copy of the following structure:
struct pvclock_vcpu_time_info {
u32 version;
u32 pad0;
u64 tsc_timestamp;
u64 system_time;
u32 tsc_to_system_mul;
s8 tsc_shift;
u8 flags;
u8 pad[2];
} __attribute__((__packed__)); /* 32 bytes */
whose data will be filled in by the hypervisor periodically. Only one
write, or registration, is needed for each VCPU. The interval between
updates of this structure is arbitrary and implementation-dependent.
The hypervisor may update this structure at any time it sees fit until
anything with bit0 == 0 is written to it.
Fields have the following meanings:
version: guest has to check version before and after grabbing
time information and check that they are both equal and even.
An odd version indicates an in-progress update.
tsc_timestamp: the tsc value at the current VCPU at the time
of the update of this structure. Guests can subtract this value
from current tsc to derive a notion of elapsed time since the
structure update.
system_time: a host notion of monotonic time, including sleep
time at the time this structure was last updated. Unit is
nanoseconds.
tsc_to_system_mul: a function of the tsc frequency. One has
to multiply any tsc-related quantity by this value to get
a value in nanoseconds, besides dividing by 2^tsc_shift
tsc_shift: cycle to nanosecond divider, as a power of two, to
allow for shift rights. One has to shift right any tsc-related
quantity by this value to get a value in nanoseconds, besides
multiplying by tsc_to_system_mul.
With this information, guests can derive per-CPU time by
doing:
time = (current_tsc - tsc_timestamp)
time = (time * tsc_to_system_mul) >> tsc_shift
time = time + system_time
flags: bits in this field indicate extended capabilities
coordinated between the guest and the hypervisor. Availability
of specific flags has to be checked in 0x40000001 cpuid leaf.
Current flags are:
flag bit | cpuid bit | meaning
-------------------------------------------------------------
| | time measures taken across
0 | 24 | multiple cpus are guaranteed to
| | be monotonic
-------------------------------------------------------------
Availability of this MSR must be checked via bit 3 in 0x40000001 cpuid
leaf prior to usage.
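
A literal transcription of the formula above into C, assuming the structure
was already read under the version protocol; a production guest (for example
the kernel's pvclock code) additionally handles negative tsc_shift values by
shifting left instead of right:

	#include <stdint.h>

	static uint64_t pvclock_guest_ns(uint64_t current_tsc,
					 uint64_t tsc_timestamp,
					 uint64_t system_time,
					 uint32_t tsc_to_system_mul,
					 int8_t tsc_shift)
	{
		uint64_t time;

		time = current_tsc - tsc_timestamp;
		time = (time * tsc_to_system_mul) >> tsc_shift;
		time = time + system_time;
		return time;
	}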
MSR_KVM_WALL_CLOCK: 0x11
data and functioning: same as MSR_KVM_WALL_CLOCK_NEW. Use that instead.
This MSR falls outside the reserved KVM range and may be removed in the
future. Its usage is deprecated.
Availability of this MSR must be checked via bit 0 in 0x40000001 cpuid
leaf prior to usage.
MSR_KVM_SYSTEM_TIME: 0x12
data and functioning: same as MSR_KVM_SYSTEM_TIME_NEW. Use that instead.
This MSR falls outside the reserved KVM range and may be removed in the
future. Its usage is deprecated.
Availability of this MSR must be checked via bit 0 in 0x40000001 cpuid
leaf prior to usage.
The suggested algorithm for detecting kvmclock presence is then:

	if (!kvm_para_available())    /* refer to cpuid.txt */
		return NON_PRESENT;

	flags = cpuid_eax(0x40000001);
	if (flags & (1 << 3)) {
		/* prefer the new MSRs (bit 3) */
		msr_kvm_system_time = MSR_KVM_SYSTEM_TIME_NEW;
		msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK_NEW;
		return PRESENT;
	} else if (flags & (1 << 0)) {
		/* fall back to the old, deprecated MSRs (bit 0) */
		msr_kvm_system_time = MSR_KVM_SYSTEM_TIME;
		msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK;
		return PRESENT;
	} else
		return NON_PRESENT;
Review checklist for kvm patches
================================
1. The patch must follow Documentation/CodingStyle and
Documentation/SubmittingPatches.
2. Patches should be against kvm.git master branch.
3. If the patch introduces or modifies a new userspace API:
- the API must be documented in Documentation/kvm/api.txt
- the API must be discoverable using KVM_CHECK_EXTENSION
4. New state must include support for save/restore.
5. New features must default to off (userspace should explicitly request them).
Performance improvements can and should default to on.
6. New cpu features should be exposed via KVM_GET_SUPPORTED_CPUID2
7. Emulator changes should be accompanied by unit tests in the qemu-kvm.git
kvm/test directory.
8. Changes should be vendor neutral when possible. Changes to common code
are better than duplicating changes to vendor code.
9. Similarly, prefer changes to arch-independent code over arch-dependent
code.
10. User/kernel interfaces and guest/host interfaces must be 64-bit clean
(all variables and sizes naturally aligned on 64-bit; use specific types
only - u64 rather than ulong).
11. New guest visible features must either be documented in a hardware manual
or be accompanied by documentation.
12. Features must be robust against reset and kexec - for example, shared
host/guest memory must be unshared to prevent the host from writing to
guest memory that the guest has not reserved for this purpose.
...@@ -235,6 +235,7 @@ struct kvm_vm_data { ...@@ -235,6 +235,7 @@ struct kvm_vm_data {
#define KVM_REQ_PTC_G 32 #define KVM_REQ_PTC_G 32
#define KVM_REQ_RESUME 33 #define KVM_REQ_RESUME 33
#define KVM_HPAGE_GFN_SHIFT(x) 0
#define KVM_NR_PAGE_SIZES 1 #define KVM_NR_PAGE_SIZES 1
#define KVM_PAGES_PER_HPAGE(x) 1 #define KVM_PAGES_PER_HPAGE(x) 1
......
...@@ -725,8 +725,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) ...@@ -725,8 +725,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
int r; int r;
sigset_t sigsaved; sigset_t sigsaved;
vcpu_load(vcpu);
if (vcpu->sigset_active) if (vcpu->sigset_active)
sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved); sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved);
...@@ -748,7 +746,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) ...@@ -748,7 +746,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
if (vcpu->sigset_active) if (vcpu->sigset_active)
sigprocmask(SIG_SETMASK, &sigsaved, NULL); sigprocmask(SIG_SETMASK, &sigsaved, NULL);
vcpu_put(vcpu);
return r; return r;
} }
...@@ -883,8 +880,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -883,8 +880,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
struct vpd *vpd = to_host(vcpu->kvm, vcpu->arch.vpd); struct vpd *vpd = to_host(vcpu->kvm, vcpu->arch.vpd);
int i; int i;
vcpu_load(vcpu);
for (i = 0; i < 16; i++) { for (i = 0; i < 16; i++) {
vpd->vgr[i] = regs->vpd.vgr[i]; vpd->vgr[i] = regs->vpd.vgr[i];
vpd->vbgr[i] = regs->vpd.vbgr[i]; vpd->vbgr[i] = regs->vpd.vbgr[i];
...@@ -931,8 +926,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -931,8 +926,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
vcpu->arch.itc_offset = regs->saved_itc - kvm_get_itc(vcpu); vcpu->arch.itc_offset = regs->saved_itc - kvm_get_itc(vcpu);
set_bit(KVM_REQ_RESUME, &vcpu->requests); set_bit(KVM_REQ_RESUME, &vcpu->requests);
vcpu_put(vcpu);
return 0; return 0;
} }
...@@ -1802,35 +1795,24 @@ void kvm_arch_exit(void) ...@@ -1802,35 +1795,24 @@ void kvm_arch_exit(void)
kvm_vmm_info = NULL; kvm_vmm_info = NULL;
} }
static int kvm_ia64_sync_dirty_log(struct kvm *kvm, static void kvm_ia64_sync_dirty_log(struct kvm *kvm,
struct kvm_dirty_log *log) struct kvm_memory_slot *memslot)
{ {
struct kvm_memory_slot *memslot; int i;
int r, i;
long base; long base;
unsigned long n; unsigned long n;
unsigned long *dirty_bitmap = (unsigned long *)(kvm->arch.vm_base + unsigned long *dirty_bitmap = (unsigned long *)(kvm->arch.vm_base +
offsetof(struct kvm_vm_data, kvm_mem_dirty_log)); offsetof(struct kvm_vm_data, kvm_mem_dirty_log));
r = -EINVAL;
if (log->slot >= KVM_MEMORY_SLOTS)
goto out;
memslot = &kvm->memslots->memslots[log->slot];
r = -ENOENT;
if (!memslot->dirty_bitmap)
goto out;
n = kvm_dirty_bitmap_bytes(memslot); n = kvm_dirty_bitmap_bytes(memslot);
base = memslot->base_gfn / BITS_PER_LONG; base = memslot->base_gfn / BITS_PER_LONG;
spin_lock(&kvm->arch.dirty_log_lock);
for (i = 0; i < n/sizeof(long); ++i) { for (i = 0; i < n/sizeof(long); ++i) {
memslot->dirty_bitmap[i] = dirty_bitmap[base + i]; memslot->dirty_bitmap[i] = dirty_bitmap[base + i];
dirty_bitmap[base + i] = 0; dirty_bitmap[base + i] = 0;
} }
r = 0; spin_unlock(&kvm->arch.dirty_log_lock);
out:
return r;
} }
int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
...@@ -1842,12 +1824,17 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, ...@@ -1842,12 +1824,17 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
int is_dirty = 0; int is_dirty = 0;
mutex_lock(&kvm->slots_lock); mutex_lock(&kvm->slots_lock);
spin_lock(&kvm->arch.dirty_log_lock);
r = kvm_ia64_sync_dirty_log(kvm, log); r = -EINVAL;
if (r) if (log->slot >= KVM_MEMORY_SLOTS)
goto out; goto out;
memslot = &kvm->memslots->memslots[log->slot];
r = -ENOENT;
if (!memslot->dirty_bitmap)
goto out;
kvm_ia64_sync_dirty_log(kvm, memslot);
r = kvm_get_dirty_log(kvm, log, &is_dirty); r = kvm_get_dirty_log(kvm, log, &is_dirty);
if (r) if (r)
goto out; goto out;
...@@ -1855,14 +1842,12 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, ...@@ -1855,14 +1842,12 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
/* If nothing is dirty, don't bother messing with page tables. */ /* If nothing is dirty, don't bother messing with page tables. */
if (is_dirty) { if (is_dirty) {
kvm_flush_remote_tlbs(kvm); kvm_flush_remote_tlbs(kvm);
memslot = &kvm->memslots->memslots[log->slot];
n = kvm_dirty_bitmap_bytes(memslot); n = kvm_dirty_bitmap_bytes(memslot);
memset(memslot->dirty_bitmap, 0, n); memset(memslot->dirty_bitmap, 0, n);
} }
r = 0; r = 0;
out: out:
mutex_unlock(&kvm->slots_lock); mutex_unlock(&kvm->slots_lock);
spin_unlock(&kvm->arch.dirty_log_lock);
return r; return r;
} }
...@@ -1953,11 +1938,6 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu) ...@@ -1953,11 +1938,6 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
return vcpu->arch.timer_fired; return vcpu->arch.timer_fired;
} }
gfn_t unalias_gfn(struct kvm *kvm, gfn_t gfn)
{
return gfn;
}
int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
{ {
return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE) || return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE) ||
...@@ -1967,9 +1947,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) ...@@ -1967,9 +1947,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu, int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
struct kvm_mp_state *mp_state) struct kvm_mp_state *mp_state)
{ {
vcpu_load(vcpu);
mp_state->mp_state = vcpu->arch.mp_state; mp_state->mp_state = vcpu->arch.mp_state;
vcpu_put(vcpu);
return 0; return 0;
} }
...@@ -2000,10 +1978,8 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu, ...@@ -2000,10 +1978,8 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
{ {
int r = 0; int r = 0;
vcpu_load(vcpu);
vcpu->arch.mp_state = mp_state->mp_state; vcpu->arch.mp_state = mp_state->mp_state;
if (vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED) if (vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED)
r = vcpu_reset(vcpu); r = vcpu_reset(vcpu);
vcpu_put(vcpu);
return r; return r;
} }
...@@ -115,7 +115,15 @@ extern void kvmppc_mmu_book3s_32_init(struct kvm_vcpu *vcpu); ...@@ -115,7 +115,15 @@ extern void kvmppc_mmu_book3s_32_init(struct kvm_vcpu *vcpu);
extern int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte); extern int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte);
extern int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr); extern int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr);
extern void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu); extern void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu);
extern struct kvmppc_pte *kvmppc_mmu_find_pte(struct kvm_vcpu *vcpu, u64 ea, bool data);
extern void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte);
extern struct hpte_cache *kvmppc_mmu_hpte_cache_next(struct kvm_vcpu *vcpu);
extern void kvmppc_mmu_hpte_destroy(struct kvm_vcpu *vcpu);
extern int kvmppc_mmu_hpte_init(struct kvm_vcpu *vcpu);
extern void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte);
extern int kvmppc_mmu_hpte_sysinit(void);
extern void kvmppc_mmu_hpte_sysexit(void);
extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data); extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data);
extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data); extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data);
extern void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec); extern void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec);
......
...@@ -22,24 +22,24 @@ ...@@ -22,24 +22,24 @@
#include <linux/types.h> #include <linux/types.h>
extern void fps_fres(struct thread_struct *t, u32 *dst, u32 *src1); extern void fps_fres(u64 *fpscr, u32 *dst, u32 *src1);
extern void fps_frsqrte(struct thread_struct *t, u32 *dst, u32 *src1); extern void fps_frsqrte(u64 *fpscr, u32 *dst, u32 *src1);
extern void fps_fsqrts(struct thread_struct *t, u32 *dst, u32 *src1); extern void fps_fsqrts(u64 *fpscr, u32 *dst, u32 *src1);
extern void fps_fadds(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2); extern void fps_fadds(u64 *fpscr, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fdivs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2); extern void fps_fdivs(u64 *fpscr, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fmuls(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2); extern void fps_fmuls(u64 *fpscr, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fsubs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2); extern void fps_fsubs(u64 *fpscr, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fmadds(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2, extern void fps_fmadds(u64 *fpscr, u32 *dst, u32 *src1, u32 *src2,
u32 *src3); u32 *src3);
extern void fps_fmsubs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2, extern void fps_fmsubs(u64 *fpscr, u32 *dst, u32 *src1, u32 *src2,
u32 *src3); u32 *src3);
extern void fps_fnmadds(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2, extern void fps_fnmadds(u64 *fpscr, u32 *dst, u32 *src1, u32 *src2,
u32 *src3); u32 *src3);
extern void fps_fnmsubs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2, extern void fps_fnmsubs(u64 *fpscr, u32 *dst, u32 *src1, u32 *src2,
u32 *src3); u32 *src3);
extern void fps_fsel(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2, extern void fps_fsel(u64 *fpscr, u32 *dst, u32 *src1, u32 *src2,
u32 *src3); u32 *src3);
#define FPD_ONE_IN(name) extern void fpd_ ## name(u64 *fpscr, u32 *cr, \ #define FPD_ONE_IN(name) extern void fpd_ ## name(u64 *fpscr, u32 *cr, \
...@@ -82,4 +82,7 @@ FPD_THREE_IN(fmadd) ...@@ -82,4 +82,7 @@ FPD_THREE_IN(fmadd)
FPD_THREE_IN(fnmsub) FPD_THREE_IN(fnmsub)
FPD_THREE_IN(fnmadd) FPD_THREE_IN(fnmadd)
extern void kvm_cvt_fd(u32 *from, u64 *to, u64 *fpscr);
extern void kvm_cvt_df(u64 *from, u32 *to, u64 *fpscr);
#endif #endif
...@@ -35,10 +35,17 @@ ...@@ -35,10 +35,17 @@
#define KVM_COALESCED_MMIO_PAGE_OFFSET 1 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
/* We don't currently support large pages. */ /* We don't currently support large pages. */
#define KVM_HPAGE_GFN_SHIFT(x) 0
#define KVM_NR_PAGE_SIZES 1 #define KVM_NR_PAGE_SIZES 1
#define KVM_PAGES_PER_HPAGE(x) (1UL<<31) #define KVM_PAGES_PER_HPAGE(x) (1UL<<31)
#define HPTEG_CACHE_NUM 1024 #define HPTEG_CACHE_NUM (1 << 15)
#define HPTEG_HASH_BITS_PTE 13
#define HPTEG_HASH_BITS_VPTE 13
#define HPTEG_HASH_BITS_VPTE_LONG 5
#define HPTEG_HASH_NUM_PTE (1 << HPTEG_HASH_BITS_PTE)
#define HPTEG_HASH_NUM_VPTE (1 << HPTEG_HASH_BITS_VPTE)
#define HPTEG_HASH_NUM_VPTE_LONG (1 << HPTEG_HASH_BITS_VPTE_LONG)
struct kvm; struct kvm;
struct kvm_run; struct kvm_run;
...@@ -151,6 +158,9 @@ struct kvmppc_mmu { ...@@ -151,6 +158,9 @@ struct kvmppc_mmu {
}; };
struct hpte_cache { struct hpte_cache {
struct hlist_node list_pte;
struct hlist_node list_vpte;
struct hlist_node list_vpte_long;
u64 host_va; u64 host_va;
u64 pfn; u64 pfn;
ulong slot; ulong slot;
...@@ -282,8 +292,10 @@ struct kvm_vcpu_arch { ...@@ -282,8 +292,10 @@ struct kvm_vcpu_arch {
unsigned long pending_exceptions; unsigned long pending_exceptions;
#ifdef CONFIG_PPC_BOOK3S #ifdef CONFIG_PPC_BOOK3S
struct hpte_cache hpte_cache[HPTEG_CACHE_NUM]; struct hlist_head hpte_hash_pte[HPTEG_HASH_NUM_PTE];
int hpte_cache_offset; struct hlist_head hpte_hash_vpte[HPTEG_HASH_NUM_VPTE];
struct hlist_head hpte_hash_vpte_long[HPTEG_HASH_NUM_VPTE_LONG];
int hpte_cache_count;
#endif #endif
}; };
......
...@@ -101,10 +101,6 @@ EXPORT_SYMBOL(pci_dram_offset); ...@@ -101,10 +101,6 @@ EXPORT_SYMBOL(pci_dram_offset);
EXPORT_SYMBOL(start_thread); EXPORT_SYMBOL(start_thread);
EXPORT_SYMBOL(kernel_thread); EXPORT_SYMBOL(kernel_thread);
#ifdef CONFIG_PPC_FPU
EXPORT_SYMBOL_GPL(cvt_df);
EXPORT_SYMBOL_GPL(cvt_fd);
#endif
EXPORT_SYMBOL(giveup_fpu); EXPORT_SYMBOL(giveup_fpu);
#ifdef CONFIG_ALTIVEC #ifdef CONFIG_ALTIVEC
EXPORT_SYMBOL(giveup_altivec); EXPORT_SYMBOL(giveup_altivec);
......
...@@ -316,7 +316,8 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 gvaddr, gpa_t gpaddr, ...@@ -316,7 +316,8 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 gvaddr, gpa_t gpaddr,
gfn = gpaddr >> PAGE_SHIFT; gfn = gpaddr >> PAGE_SHIFT;
new_page = gfn_to_page(vcpu->kvm, gfn); new_page = gfn_to_page(vcpu->kvm, gfn);
if (is_error_page(new_page)) { if (is_error_page(new_page)) {
printk(KERN_ERR "Couldn't get guest page for gfn %lx!\n", gfn); printk(KERN_ERR "Couldn't get guest page for gfn %llx!\n",
(unsigned long long)gfn);
kvm_release_page_clean(new_page); kvm_release_page_clean(new_page);
return; return;
} }
......
...@@ -45,6 +45,7 @@ kvm-book3s_64-objs := \ ...@@ -45,6 +45,7 @@ kvm-book3s_64-objs := \
book3s.o \ book3s.o \
book3s_emulate.o \ book3s_emulate.o \
book3s_interrupts.o \ book3s_interrupts.o \
book3s_mmu_hpte.o \
book3s_64_mmu_host.o \ book3s_64_mmu_host.o \
book3s_64_mmu.o \ book3s_64_mmu.o \
book3s_32_mmu.o book3s_32_mmu.o
...@@ -57,6 +58,7 @@ kvm-book3s_32-objs := \ ...@@ -57,6 +58,7 @@ kvm-book3s_32-objs := \
book3s.o \ book3s.o \
book3s_emulate.o \ book3s_emulate.o \
book3s_interrupts.o \ book3s_interrupts.o \
book3s_mmu_hpte.o \
book3s_32_mmu_host.o \ book3s_32_mmu_host.o \
book3s_32_mmu.o book3s_32_mmu.o
kvm-objs-$(CONFIG_KVM_BOOK3S_32) := $(kvm-book3s_32-objs) kvm-objs-$(CONFIG_KVM_BOOK3S_32) := $(kvm-book3s_32-objs)
......
...@@ -1047,8 +1047,6 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -1047,8 +1047,6 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{ {
int i; int i;
vcpu_load(vcpu);
regs->pc = kvmppc_get_pc(vcpu); regs->pc = kvmppc_get_pc(vcpu);
regs->cr = kvmppc_get_cr(vcpu); regs->cr = kvmppc_get_cr(vcpu);
regs->ctr = kvmppc_get_ctr(vcpu); regs->ctr = kvmppc_get_ctr(vcpu);
...@@ -1069,8 +1067,6 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -1069,8 +1067,6 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
for (i = 0; i < ARRAY_SIZE(regs->gpr); i++) for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
regs->gpr[i] = kvmppc_get_gpr(vcpu, i); regs->gpr[i] = kvmppc_get_gpr(vcpu, i);
vcpu_put(vcpu);
return 0; return 0;
} }
...@@ -1078,8 +1074,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -1078,8 +1074,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{ {
int i; int i;
vcpu_load(vcpu);
kvmppc_set_pc(vcpu, regs->pc); kvmppc_set_pc(vcpu, regs->pc);
kvmppc_set_cr(vcpu, regs->cr); kvmppc_set_cr(vcpu, regs->cr);
kvmppc_set_ctr(vcpu, regs->ctr); kvmppc_set_ctr(vcpu, regs->ctr);
...@@ -1099,8 +1093,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -1099,8 +1093,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
for (i = 0; i < ARRAY_SIZE(regs->gpr); i++) for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
kvmppc_set_gpr(vcpu, i, regs->gpr[i]); kvmppc_set_gpr(vcpu, i, regs->gpr[i]);
vcpu_put(vcpu);
return 0; return 0;
} }
...@@ -1110,8 +1102,6 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu, ...@@ -1110,8 +1102,6 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu); struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
int i; int i;
vcpu_load(vcpu);
sregs->pvr = vcpu->arch.pvr; sregs->pvr = vcpu->arch.pvr;
sregs->u.s.sdr1 = to_book3s(vcpu)->sdr1; sregs->u.s.sdr1 = to_book3s(vcpu)->sdr1;
...@@ -1131,8 +1121,6 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu, ...@@ -1131,8 +1121,6 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
} }
} }
vcpu_put(vcpu);
return 0; return 0;
} }
...@@ -1142,8 +1130,6 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, ...@@ -1142,8 +1130,6 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu); struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
int i; int i;
vcpu_load(vcpu);
kvmppc_set_pvr(vcpu, sregs->pvr); kvmppc_set_pvr(vcpu, sregs->pvr);
vcpu3s->sdr1 = sregs->u.s.sdr1; vcpu3s->sdr1 = sregs->u.s.sdr1;
...@@ -1171,8 +1157,6 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, ...@@ -1171,8 +1157,6 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
/* Flush the MMU after messing with the segments */ /* Flush the MMU after messing with the segments */
kvmppc_mmu_pte_flush(vcpu, 0, 0); kvmppc_mmu_pte_flush(vcpu, 0, 0);
vcpu_put(vcpu);
return 0; return 0;
} }
...@@ -1309,12 +1293,17 @@ extern int __kvmppc_vcpu_entry(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu); ...@@ -1309,12 +1293,17 @@ extern int __kvmppc_vcpu_entry(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu) int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
{ {
int ret; int ret;
struct thread_struct ext_bkp; double fpr[32][TS_FPRWIDTH];
unsigned int fpscr;
int fpexc_mode;
#ifdef CONFIG_ALTIVEC #ifdef CONFIG_ALTIVEC
bool save_vec = current->thread.used_vr; vector128 vr[32];
vector128 vscr;
unsigned long uninitialized_var(vrsave);
int used_vr;
#endif #endif
#ifdef CONFIG_VSX #ifdef CONFIG_VSX
bool save_vsx = current->thread.used_vsr; int used_vsr;
#endif #endif
ulong ext_msr; ulong ext_msr;
...@@ -1327,27 +1316,27 @@ int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu) ...@@ -1327,27 +1316,27 @@ int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
/* Save FPU state in stack */ /* Save FPU state in stack */
if (current->thread.regs->msr & MSR_FP) if (current->thread.regs->msr & MSR_FP)
giveup_fpu(current); giveup_fpu(current);
memcpy(ext_bkp.fpr, current->thread.fpr, sizeof(current->thread.fpr)); memcpy(fpr, current->thread.fpr, sizeof(current->thread.fpr));
ext_bkp.fpscr = current->thread.fpscr; fpscr = current->thread.fpscr.val;
ext_bkp.fpexc_mode = current->thread.fpexc_mode; fpexc_mode = current->thread.fpexc_mode;
#ifdef CONFIG_ALTIVEC #ifdef CONFIG_ALTIVEC
/* Save Altivec state in stack */ /* Save Altivec state in stack */
if (save_vec) { used_vr = current->thread.used_vr;
if (used_vr) {
if (current->thread.regs->msr & MSR_VEC) if (current->thread.regs->msr & MSR_VEC)
giveup_altivec(current); giveup_altivec(current);
memcpy(ext_bkp.vr, current->thread.vr, sizeof(ext_bkp.vr)); memcpy(vr, current->thread.vr, sizeof(current->thread.vr));
ext_bkp.vscr = current->thread.vscr; vscr = current->thread.vscr;
ext_bkp.vrsave = current->thread.vrsave; vrsave = current->thread.vrsave;
} }
ext_bkp.used_vr = current->thread.used_vr;
#endif #endif
#ifdef CONFIG_VSX #ifdef CONFIG_VSX
/* Save VSX state in stack */ /* Save VSX state in stack */
if (save_vsx && (current->thread.regs->msr & MSR_VSX)) used_vsr = current->thread.used_vsr;
if (used_vsr && (current->thread.regs->msr & MSR_VSX))
__giveup_vsx(current); __giveup_vsx(current);
ext_bkp.used_vsr = current->thread.used_vsr;
#endif #endif
/* Remember the MSR with disabled extensions */ /* Remember the MSR with disabled extensions */
...@@ -1372,22 +1361,22 @@ int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu) ...@@ -1372,22 +1361,22 @@ int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
kvmppc_giveup_ext(vcpu, MSR_VSX); kvmppc_giveup_ext(vcpu, MSR_VSX);
/* Restore FPU state from stack */ /* Restore FPU state from stack */
memcpy(current->thread.fpr, ext_bkp.fpr, sizeof(ext_bkp.fpr)); memcpy(current->thread.fpr, fpr, sizeof(current->thread.fpr));
current->thread.fpscr = ext_bkp.fpscr; current->thread.fpscr.val = fpscr;
current->thread.fpexc_mode = ext_bkp.fpexc_mode; current->thread.fpexc_mode = fpexc_mode;
#ifdef CONFIG_ALTIVEC #ifdef CONFIG_ALTIVEC
/* Restore Altivec state from stack */ /* Restore Altivec state from stack */
if (save_vec && current->thread.used_vr) { if (used_vr && current->thread.used_vr) {
memcpy(current->thread.vr, ext_bkp.vr, sizeof(ext_bkp.vr)); memcpy(current->thread.vr, vr, sizeof(current->thread.vr));
current->thread.vscr = ext_bkp.vscr; current->thread.vscr = vscr;
current->thread.vrsave= ext_bkp.vrsave; current->thread.vrsave = vrsave;
} }
current->thread.used_vr = ext_bkp.used_vr; current->thread.used_vr = used_vr;
#endif #endif
#ifdef CONFIG_VSX #ifdef CONFIG_VSX
current->thread.used_vsr = ext_bkp.used_vsr; current->thread.used_vsr = used_vsr;
#endif #endif
return ret; return ret;
...@@ -1395,12 +1384,22 @@ int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu) ...@@ -1395,12 +1384,22 @@ int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
static int kvmppc_book3s_init(void) static int kvmppc_book3s_init(void)
{ {
return kvm_init(NULL, sizeof(struct kvmppc_vcpu_book3s), 0, int r;
r = kvm_init(NULL, sizeof(struct kvmppc_vcpu_book3s), 0,
THIS_MODULE); THIS_MODULE);
if (r)
return r;
r = kvmppc_mmu_hpte_sysinit();
return r;
} }
static void kvmppc_book3s_exit(void) static void kvmppc_book3s_exit(void)
{ {
kvmppc_mmu_hpte_sysexit();
kvm_exit(); kvm_exit();
} }
......
...@@ -354,10 +354,10 @@ static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid, ...@@ -354,10 +354,10 @@ static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
*vsid = VSID_REAL_DR | gvsid; *vsid = VSID_REAL_DR | gvsid;
break; break;
case MSR_DR|MSR_IR: case MSR_DR|MSR_IR:
if (!sr->valid) if (sr->valid)
return -1;
*vsid = sr->vsid; *vsid = sr->vsid;
else
*vsid = VSID_BAT | gvsid;
break; break;
default: default:
BUG(); BUG();
......
...@@ -19,6 +19,7 @@ ...@@ -19,6 +19,7 @@
*/ */
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#include <linux/hash.h>
#include <asm/kvm_ppc.h> #include <asm/kvm_ppc.h>
#include <asm/kvm_book3s.h> #include <asm/kvm_book3s.h>
...@@ -57,139 +58,26 @@ ...@@ -57,139 +58,26 @@
static ulong htab; static ulong htab;
static u32 htabmask; static u32 htabmask;
static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte) void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
{ {
volatile u32 *pteg; volatile u32 *pteg;
dprintk_mmu("KVM: Flushing SPTE: 0x%llx (0x%llx) -> 0x%llx\n", /* Remove from host HTAB */
pte->pte.eaddr, pte->pte.vpage, pte->host_va);
pteg = (u32*)pte->slot; pteg = (u32*)pte->slot;
pteg[0] = 0; pteg[0] = 0;
/* And make sure it's gone from the TLB too */
asm volatile ("sync"); asm volatile ("sync");
asm volatile ("tlbie %0" : : "r" (pte->pte.eaddr) : "memory"); asm volatile ("tlbie %0" : : "r" (pte->pte.eaddr) : "memory");
asm volatile ("sync"); asm volatile ("sync");
asm volatile ("tlbsync"); asm volatile ("tlbsync");
pte->host_va = 0;
if (pte->pte.may_write)
kvm_release_pfn_dirty(pte->pfn);
else
kvm_release_pfn_clean(pte->pfn);
}
void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong guest_ea, ulong ea_mask)
{
int i;
dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%x & 0x%x\n",
vcpu->arch.hpte_cache_offset, guest_ea, ea_mask);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
guest_ea &= ea_mask;
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if ((pte->pte.eaddr & ea_mask) == guest_ea) {
invalidate_pte(vcpu, pte);
}
}
/* Doing a complete flush -> start from scratch */
if (!ea_mask)
vcpu->arch.hpte_cache_offset = 0;
}
void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
{
int i;
dprintk_mmu("KVM: Flushing %d Shadow vPTEs: 0x%llx & 0x%llx\n",
vcpu->arch.hpte_cache_offset, guest_vp, vp_mask);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
guest_vp &= vp_mask;
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if ((pte->pte.vpage & vp_mask) == guest_vp) {
invalidate_pte(vcpu, pte);
}
}
}
void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end)
{
int i;
dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%llx & 0x%llx\n",
vcpu->arch.hpte_cache_offset, pa_start, pa_end);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if ((pte->pte.raddr >= pa_start) &&
(pte->pte.raddr < pa_end)) {
invalidate_pte(vcpu, pte);
}
}
}
struct kvmppc_pte *kvmppc_mmu_find_pte(struct kvm_vcpu *vcpu, u64 ea, bool data)
{
int i;
u64 guest_vp;
guest_vp = vcpu->arch.mmu.ea_to_vp(vcpu, ea, false);
for (i=0; i<vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if (pte->pte.vpage == guest_vp)
return &pte->pte;
}
return NULL;
}
static int kvmppc_mmu_hpte_cache_next(struct kvm_vcpu *vcpu)
{
if (vcpu->arch.hpte_cache_offset == HPTEG_CACHE_NUM)
kvmppc_mmu_pte_flush(vcpu, 0, 0);
return vcpu->arch.hpte_cache_offset++;
} }
/* We keep 512 gvsid->hvsid entries, mapping the guest ones to the array using /* We keep 512 gvsid->hvsid entries, mapping the guest ones to the array using
* a hash, so we don't waste cycles on looping */ * a hash, so we don't waste cycles on looping */
static u16 kvmppc_sid_hash(struct kvm_vcpu *vcpu, u64 gvsid) static u16 kvmppc_sid_hash(struct kvm_vcpu *vcpu, u64 gvsid)
{ {
return (u16)(((gvsid >> (SID_MAP_BITS * 7)) & SID_MAP_MASK) ^ return hash_64(gvsid, SID_MAP_BITS);
((gvsid >> (SID_MAP_BITS * 6)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 5)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 4)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 3)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 2)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 1)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 0)) & SID_MAP_MASK));
} }
...@@ -256,7 +144,6 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte) ...@@ -256,7 +144,6 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
register int rr = 0; register int rr = 0;
bool primary = false; bool primary = false;
bool evict = false; bool evict = false;
int hpte_id;
struct hpte_cache *pte; struct hpte_cache *pte;
/* Get host physical address for gpa */ /* Get host physical address for gpa */
...@@ -341,8 +228,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte) ...@@ -341,8 +228,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
/* Now tell our Shadow PTE code about the new page */ /* Now tell our Shadow PTE code about the new page */
hpte_id = kvmppc_mmu_hpte_cache_next(vcpu); pte = kvmppc_mmu_hpte_cache_next(vcpu);
pte = &vcpu->arch.hpte_cache[hpte_id];
dprintk_mmu("KVM: %c%c Map 0x%llx: [%lx] 0x%llx (0x%llx) -> %lx\n", dprintk_mmu("KVM: %c%c Map 0x%llx: [%lx] 0x%llx (0x%llx) -> %lx\n",
orig_pte->may_write ? 'w' : '-', orig_pte->may_write ? 'w' : '-',
...@@ -355,6 +241,8 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte) ...@@ -355,6 +241,8 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
pte->pte = *orig_pte; pte->pte = *orig_pte;
pte->pfn = hpaddr >> PAGE_SHIFT; pte->pfn = hpaddr >> PAGE_SHIFT;
kvmppc_mmu_hpte_cache_map(vcpu, pte);
return 0; return 0;
} }
...@@ -439,7 +327,7 @@ void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu) ...@@ -439,7 +327,7 @@ void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu) void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
{ {
kvmppc_mmu_pte_flush(vcpu, 0, 0); kvmppc_mmu_hpte_destroy(vcpu);
preempt_disable(); preempt_disable();
__destroy_context(to_book3s(vcpu)->context_id); __destroy_context(to_book3s(vcpu)->context_id);
preempt_enable(); preempt_enable();
...@@ -479,5 +367,7 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu) ...@@ -479,5 +367,7 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
htabmask = ((sdr1 & 0x1FF) << 16) | 0xFFC0; htabmask = ((sdr1 & 0x1FF) << 16) | 0xFFC0;
htab = (ulong)__va(sdr1 & 0xffff0000); htab = (ulong)__va(sdr1 & 0xffff0000);
kvmppc_mmu_hpte_init(vcpu);
return 0; return 0;
} }
...@@ -20,6 +20,7 @@ ...@@ -20,6 +20,7 @@
*/ */
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#include <linux/hash.h>
#include <asm/kvm_ppc.h> #include <asm/kvm_ppc.h>
#include <asm/kvm_book3s.h> #include <asm/kvm_book3s.h>
...@@ -46,135 +47,20 @@ ...@@ -46,135 +47,20 @@
#define dprintk_slb(a, ...) do { } while(0) #define dprintk_slb(a, ...) do { } while(0)
#endif #endif
static void invalidate_pte(struct hpte_cache *pte) void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
{ {
dprintk_mmu("KVM: Flushing SPT: 0x%lx (0x%llx) -> 0x%llx\n",
pte->pte.eaddr, pte->pte.vpage, pte->host_va);
ppc_md.hpte_invalidate(pte->slot, pte->host_va, ppc_md.hpte_invalidate(pte->slot, pte->host_va,
MMU_PAGE_4K, MMU_SEGSIZE_256M, MMU_PAGE_4K, MMU_SEGSIZE_256M,
false); false);
pte->host_va = 0;
if (pte->pte.may_write)
kvm_release_pfn_dirty(pte->pfn);
else
kvm_release_pfn_clean(pte->pfn);
}
void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong guest_ea, ulong ea_mask)
{
int i;
dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%lx & 0x%lx\n",
vcpu->arch.hpte_cache_offset, guest_ea, ea_mask);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
guest_ea &= ea_mask;
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if ((pte->pte.eaddr & ea_mask) == guest_ea) {
invalidate_pte(pte);
}
}
/* Doing a complete flush -> start from scratch */
if (!ea_mask)
vcpu->arch.hpte_cache_offset = 0;
}
void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
{
int i;
dprintk_mmu("KVM: Flushing %d Shadow vPTEs: 0x%llx & 0x%llx\n",
vcpu->arch.hpte_cache_offset, guest_vp, vp_mask);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
guest_vp &= vp_mask;
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if ((pte->pte.vpage & vp_mask) == guest_vp) {
invalidate_pte(pte);
}
}
}
void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end)
{
int i;
dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%lx & 0x%lx\n",
vcpu->arch.hpte_cache_offset, pa_start, pa_end);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if ((pte->pte.raddr >= pa_start) &&
(pte->pte.raddr < pa_end)) {
invalidate_pte(pte);
}
}
}
struct kvmppc_pte *kvmppc_mmu_find_pte(struct kvm_vcpu *vcpu, u64 ea, bool data)
{
int i;
u64 guest_vp;
guest_vp = vcpu->arch.mmu.ea_to_vp(vcpu, ea, false);
for (i=0; i<vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if (pte->pte.vpage == guest_vp)
return &pte->pte;
}
return NULL;
}
static int kvmppc_mmu_hpte_cache_next(struct kvm_vcpu *vcpu)
{
if (vcpu->arch.hpte_cache_offset == HPTEG_CACHE_NUM)
kvmppc_mmu_pte_flush(vcpu, 0, 0);
return vcpu->arch.hpte_cache_offset++;
} }
/* We keep 512 gvsid->hvsid entries, mapping the guest ones to the array using /* We keep 512 gvsid->hvsid entries, mapping the guest ones to the array using
* a hash, so we don't waste cycles on looping */ * a hash, so we don't waste cycles on looping */
static u16 kvmppc_sid_hash(struct kvm_vcpu *vcpu, u64 gvsid) static u16 kvmppc_sid_hash(struct kvm_vcpu *vcpu, u64 gvsid)
{ {
return (u16)(((gvsid >> (SID_MAP_BITS * 7)) & SID_MAP_MASK) ^ return hash_64(gvsid, SID_MAP_BITS);
((gvsid >> (SID_MAP_BITS * 6)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 5)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 4)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 3)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 2)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 1)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 0)) & SID_MAP_MASK));
} }
static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid) static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
{ {
struct kvmppc_sid_map *map; struct kvmppc_sid_map *map;
...@@ -273,8 +159,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte) ...@@ -273,8 +159,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
attempt++; attempt++;
goto map_again; goto map_again;
} else { } else {
int hpte_id = kvmppc_mmu_hpte_cache_next(vcpu); struct hpte_cache *pte = kvmppc_mmu_hpte_cache_next(vcpu);
struct hpte_cache *pte = &vcpu->arch.hpte_cache[hpte_id];
dprintk_mmu("KVM: %c%c Map 0x%lx: [%lx] 0x%lx (0x%llx) -> %lx\n", dprintk_mmu("KVM: %c%c Map 0x%lx: [%lx] 0x%lx (0x%llx) -> %lx\n",
((rflags & HPTE_R_PP) == 3) ? '-' : 'w', ((rflags & HPTE_R_PP) == 3) ? '-' : 'w',
...@@ -292,6 +177,8 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte) ...@@ -292,6 +177,8 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
pte->host_va = va; pte->host_va = va;
pte->pte = *orig_pte; pte->pte = *orig_pte;
pte->pfn = hpaddr >> PAGE_SHIFT; pte->pfn = hpaddr >> PAGE_SHIFT;
kvmppc_mmu_hpte_cache_map(vcpu, pte);
} }
return 0; return 0;
...@@ -418,7 +305,7 @@ void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu) ...@@ -418,7 +305,7 @@ void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu) void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
{ {
kvmppc_mmu_pte_flush(vcpu, 0, 0); kvmppc_mmu_hpte_destroy(vcpu);
__destroy_context(to_book3s(vcpu)->context_id); __destroy_context(to_book3s(vcpu)->context_id);
} }
...@@ -436,5 +323,7 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu) ...@@ -436,5 +323,7 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
vcpu3s->vsid_first = vcpu3s->context_id << USER_ESID_BITS; vcpu3s->vsid_first = vcpu3s->context_id << USER_ESID_BITS;
vcpu3s->vsid_next = vcpu3s->vsid_first; vcpu3s->vsid_next = vcpu3s->vsid_first;
kvmppc_mmu_hpte_init(vcpu);
return 0; return 0;
} }
/*
* Copyright (C) 2010 SUSE Linux Products GmbH. All rights reserved.
*
* Authors:
* Alexander Graf <agraf@suse.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <linux/kvm_host.h>
#include <linux/hash.h>
#include <linux/slab.h>
#include <asm/kvm_ppc.h>
#include <asm/kvm_book3s.h>
#include <asm/machdep.h>
#include <asm/mmu_context.h>
#include <asm/hw_irq.h>
#define PTE_SIZE 12
/* #define DEBUG_MMU */
#ifdef DEBUG_MMU
#define dprintk_mmu(a, ...) printk(KERN_INFO a, __VA_ARGS__)
#else
#define dprintk_mmu(a, ...) do { } while(0)
#endif
static struct kmem_cache *hpte_cache;
static inline u64 kvmppc_mmu_hash_pte(u64 eaddr)
{
return hash_64(eaddr >> PTE_SIZE, HPTEG_HASH_BITS_PTE);
}
static inline u64 kvmppc_mmu_hash_vpte(u64 vpage)
{
return hash_64(vpage & 0xfffffffffULL, HPTEG_HASH_BITS_VPTE);
}
static inline u64 kvmppc_mmu_hash_vpte_long(u64 vpage)
{
return hash_64((vpage & 0xffffff000ULL) >> 12,
HPTEG_HASH_BITS_VPTE_LONG);
}
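Illustrative note (not part of the patch): a worked example of how the two vPTE hashes above partition the virtual page number. The HPTEG_HASH_BITS_* constants come from the book3s headers and only set the bucket counts, so they are left symbolic here.

/* vpage = 0x12345678:
 *   kvmppc_mmu_hash_vpte()      hashes vpage & 0xfffffffff        = 0x12345678
 *   kvmppc_mmu_hash_vpte_long() hashes (vpage & 0xffffff000) >> 12 = 0x12345
 * so the 4096 vpages that share bits 12..35 land in the same "long" bucket
 * and can later be invalidated as a group via
 * kvmppc_mmu_pte_vflush(vcpu, vp, 0xffffff000ULL).
 */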
void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
{
u64 index;
/* Add to ePTE list */
index = kvmppc_mmu_hash_pte(pte->pte.eaddr);
hlist_add_head(&pte->list_pte, &vcpu->arch.hpte_hash_pte[index]);
/* Add to vPTE list */
index = kvmppc_mmu_hash_vpte(pte->pte.vpage);
hlist_add_head(&pte->list_vpte, &vcpu->arch.hpte_hash_vpte[index]);
/* Add to vPTE_long list */
index = kvmppc_mmu_hash_vpte_long(pte->pte.vpage);
hlist_add_head(&pte->list_vpte_long,
&vcpu->arch.hpte_hash_vpte_long[index]);
}
static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
{
dprintk_mmu("KVM: Flushing SPT: 0x%lx (0x%llx) -> 0x%llx\n",
pte->pte.eaddr, pte->pte.vpage, pte->host_va);
/* Different for 32 and 64 bit */
kvmppc_mmu_invalidate_pte(vcpu, pte);
if (pte->pte.may_write)
kvm_release_pfn_dirty(pte->pfn);
else
kvm_release_pfn_clean(pte->pfn);
hlist_del(&pte->list_pte);
hlist_del(&pte->list_vpte);
hlist_del(&pte->list_vpte_long);
vcpu->arch.hpte_cache_count--;
kmem_cache_free(hpte_cache, pte);
}
static void kvmppc_mmu_pte_flush_all(struct kvm_vcpu *vcpu)
{
struct hpte_cache *pte;
struct hlist_node *node, *tmp;
int i;
for (i = 0; i < HPTEG_HASH_NUM_VPTE_LONG; i++) {
struct hlist_head *list = &vcpu->arch.hpte_hash_vpte_long[i];
hlist_for_each_entry_safe(pte, node, tmp, list, list_vpte_long)
invalidate_pte(vcpu, pte);
}
}
static void kvmppc_mmu_pte_flush_page(struct kvm_vcpu *vcpu, ulong guest_ea)
{
struct hlist_head *list;
struct hlist_node *node, *tmp;
struct hpte_cache *pte;
/* Find the list of entries in the map */
list = &vcpu->arch.hpte_hash_pte[kvmppc_mmu_hash_pte(guest_ea)];
/* Check the list for matching entries and invalidate */
hlist_for_each_entry_safe(pte, node, tmp, list, list_pte)
if ((pte->pte.eaddr & ~0xfffUL) == guest_ea)
invalidate_pte(vcpu, pte);
}
void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong guest_ea, ulong ea_mask)
{
u64 i;
dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%lx & 0x%lx\n",
vcpu->arch.hpte_cache_count, guest_ea, ea_mask);
guest_ea &= ea_mask;
switch (ea_mask) {
case ~0xfffUL:
kvmppc_mmu_pte_flush_page(vcpu, guest_ea);
break;
case 0x0ffff000:
/* 32-bit flush w/o segment, go through all possible segments */
for (i = 0; i < 0x100000000ULL; i += 0x10000000ULL)
kvmppc_mmu_pte_flush(vcpu, guest_ea | i, ~0xfffUL);
break;
case 0:
/* Doing a complete flush -> start from scratch */
kvmppc_mmu_pte_flush_all(vcpu);
break;
default:
WARN_ON(1);
break;
}
}
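For illustration only (not part of the patch), the three ea_mask granularities handled by the switch above correspond to calls of the following shape; eaddr is a hypothetical guest effective address:

kvmppc_mmu_pte_flush(vcpu, eaddr, ~0xfffUL);    /* drop the shadow PTEs of one 4k page */
kvmppc_mmu_pte_flush(vcpu, eaddr, 0x0ffff000);  /* 32-bit flush, walked across all 16 segments */
kvmppc_mmu_pte_flush(vcpu, 0, 0);               /* full flush: every cached shadow PTE */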
/* Flush with mask 0xfffffffff */
static void kvmppc_mmu_pte_vflush_short(struct kvm_vcpu *vcpu, u64 guest_vp)
{
struct hlist_head *list;
struct hlist_node *node, *tmp;
struct hpte_cache *pte;
u64 vp_mask = 0xfffffffffULL;
list = &vcpu->arch.hpte_hash_vpte[kvmppc_mmu_hash_vpte(guest_vp)];
/* Check the list for matching entries and invalidate */
hlist_for_each_entry_safe(pte, node, tmp, list, list_vpte)
if ((pte->pte.vpage & vp_mask) == guest_vp)
invalidate_pte(vcpu, pte);
}
/* Flush with mask 0xffffff000 */
static void kvmppc_mmu_pte_vflush_long(struct kvm_vcpu *vcpu, u64 guest_vp)
{
struct hlist_head *list;
struct hlist_node *node, *tmp;
struct hpte_cache *pte;
u64 vp_mask = 0xffffff000ULL;
list = &vcpu->arch.hpte_hash_vpte_long[
kvmppc_mmu_hash_vpte_long(guest_vp)];
/* Check the list for matching entries and invalidate */
hlist_for_each_entry_safe(pte, node, tmp, list, list_vpte_long)
if ((pte->pte.vpage & vp_mask) == guest_vp)
invalidate_pte(vcpu, pte);
}
void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
{
dprintk_mmu("KVM: Flushing %d Shadow vPTEs: 0x%llx & 0x%llx\n",
vcpu->arch.hpte_cache_count, guest_vp, vp_mask);
guest_vp &= vp_mask;
switch(vp_mask) {
case 0xfffffffffULL:
kvmppc_mmu_pte_vflush_short(vcpu, guest_vp);
break;
case 0xffffff000ULL:
kvmppc_mmu_pte_vflush_long(vcpu, guest_vp);
break;
default:
WARN_ON(1);
return;
}
}
void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end)
{
struct hlist_node *node, *tmp;
struct hpte_cache *pte;
int i;
dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%lx - 0x%lx\n",
vcpu->arch.hpte_cache_count, pa_start, pa_end);
for (i = 0; i < HPTEG_HASH_NUM_VPTE_LONG; i++) {
struct hlist_head *list = &vcpu->arch.hpte_hash_vpte_long[i];
hlist_for_each_entry_safe(pte, node, tmp, list, list_vpte_long)
if ((pte->pte.raddr >= pa_start) &&
(pte->pte.raddr < pa_end))
invalidate_pte(vcpu, pte);
}
}
struct hpte_cache *kvmppc_mmu_hpte_cache_next(struct kvm_vcpu *vcpu)
{
struct hpte_cache *pte;
pte = kmem_cache_zalloc(hpte_cache, GFP_KERNEL);
vcpu->arch.hpte_cache_count++;
if (vcpu->arch.hpte_cache_count == HPTEG_CACHE_NUM)
kvmppc_mmu_pte_flush_all(vcpu);
return pte;
}
void kvmppc_mmu_hpte_destroy(struct kvm_vcpu *vcpu)
{
kvmppc_mmu_pte_flush(vcpu, 0, 0);
}
static void kvmppc_mmu_hpte_init_hash(struct hlist_head *hash_list, int len)
{
int i;
for (i = 0; i < len; i++)
INIT_HLIST_HEAD(&hash_list[i]);
}
int kvmppc_mmu_hpte_init(struct kvm_vcpu *vcpu)
{
/* init hpte lookup hashes */
kvmppc_mmu_hpte_init_hash(vcpu->arch.hpte_hash_pte,
ARRAY_SIZE(vcpu->arch.hpte_hash_pte));
kvmppc_mmu_hpte_init_hash(vcpu->arch.hpte_hash_vpte,
ARRAY_SIZE(vcpu->arch.hpte_hash_vpte));
kvmppc_mmu_hpte_init_hash(vcpu->arch.hpte_hash_vpte_long,
ARRAY_SIZE(vcpu->arch.hpte_hash_vpte_long));
return 0;
}
int kvmppc_mmu_hpte_sysinit(void)
{
/* init hpte slab cache */
hpte_cache = kmem_cache_create("kvm-spt", sizeof(struct hpte_cache),
sizeof(struct hpte_cache), 0, NULL);
return 0;
}
void kvmppc_mmu_hpte_sysexit(void)
{
kmem_cache_destroy(hpte_cache);
}
...@@ -159,10 +159,7 @@ ...@@ -159,10 +159,7 @@
static inline void kvmppc_sync_qpr(struct kvm_vcpu *vcpu, int rt) static inline void kvmppc_sync_qpr(struct kvm_vcpu *vcpu, int rt)
{ {
struct thread_struct t; kvm_cvt_df(&vcpu->arch.fpr[rt], &vcpu->arch.qpr[rt], &vcpu->arch.fpscr);
t.fpscr.val = vcpu->arch.fpscr;
cvt_df((double*)&vcpu->arch.fpr[rt], (float*)&vcpu->arch.qpr[rt], &t);
} }
static void kvmppc_inject_pf(struct kvm_vcpu *vcpu, ulong eaddr, bool is_store) static void kvmppc_inject_pf(struct kvm_vcpu *vcpu, ulong eaddr, bool is_store)
...@@ -183,7 +180,6 @@ static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -183,7 +180,6 @@ static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
int rs, ulong addr, int ls_type) int rs, ulong addr, int ls_type)
{ {
int emulated = EMULATE_FAIL; int emulated = EMULATE_FAIL;
struct thread_struct t;
int r; int r;
char tmp[8]; char tmp[8];
int len = sizeof(u32); int len = sizeof(u32);
...@@ -191,8 +187,6 @@ static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -191,8 +187,6 @@ static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
if (ls_type == FPU_LS_DOUBLE) if (ls_type == FPU_LS_DOUBLE)
len = sizeof(u64); len = sizeof(u64);
t.fpscr.val = vcpu->arch.fpscr;
/* read from memory */ /* read from memory */
r = kvmppc_ld(vcpu, &addr, len, tmp, true); r = kvmppc_ld(vcpu, &addr, len, tmp, true);
vcpu->arch.paddr_accessed = addr; vcpu->arch.paddr_accessed = addr;
...@@ -210,7 +204,7 @@ static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -210,7 +204,7 @@ static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
/* put in registers */ /* put in registers */
switch (ls_type) { switch (ls_type) {
case FPU_LS_SINGLE: case FPU_LS_SINGLE:
cvt_fd((float*)tmp, (double*)&vcpu->arch.fpr[rs], &t); kvm_cvt_fd((u32*)tmp, &vcpu->arch.fpr[rs], &vcpu->arch.fpscr);
vcpu->arch.qpr[rs] = *((u32*)tmp); vcpu->arch.qpr[rs] = *((u32*)tmp);
break; break;
case FPU_LS_DOUBLE: case FPU_LS_DOUBLE:
...@@ -229,17 +223,14 @@ static int kvmppc_emulate_fpr_store(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -229,17 +223,14 @@ static int kvmppc_emulate_fpr_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
int rs, ulong addr, int ls_type) int rs, ulong addr, int ls_type)
{ {
int emulated = EMULATE_FAIL; int emulated = EMULATE_FAIL;
struct thread_struct t;
int r; int r;
char tmp[8]; char tmp[8];
u64 val; u64 val;
int len; int len;
t.fpscr.val = vcpu->arch.fpscr;
switch (ls_type) { switch (ls_type) {
case FPU_LS_SINGLE: case FPU_LS_SINGLE:
cvt_df((double*)&vcpu->arch.fpr[rs], (float*)tmp, &t); kvm_cvt_df(&vcpu->arch.fpr[rs], (u32*)tmp, &vcpu->arch.fpscr);
val = *((u32*)tmp); val = *((u32*)tmp);
len = sizeof(u32); len = sizeof(u32);
break; break;
...@@ -278,13 +269,10 @@ static int kvmppc_emulate_psq_load(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -278,13 +269,10 @@ static int kvmppc_emulate_psq_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
int rs, ulong addr, bool w, int i) int rs, ulong addr, bool w, int i)
{ {
int emulated = EMULATE_FAIL; int emulated = EMULATE_FAIL;
struct thread_struct t;
int r; int r;
float one = 1.0; float one = 1.0;
u32 tmp[2]; u32 tmp[2];
t.fpscr.val = vcpu->arch.fpscr;
/* read from memory */ /* read from memory */
if (w) { if (w) {
r = kvmppc_ld(vcpu, &addr, sizeof(u32), tmp, true); r = kvmppc_ld(vcpu, &addr, sizeof(u32), tmp, true);
...@@ -308,7 +296,7 @@ static int kvmppc_emulate_psq_load(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -308,7 +296,7 @@ static int kvmppc_emulate_psq_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
emulated = EMULATE_DONE; emulated = EMULATE_DONE;
/* put in registers */ /* put in registers */
cvt_fd((float*)&tmp[0], (double*)&vcpu->arch.fpr[rs], &t); kvm_cvt_fd(&tmp[0], &vcpu->arch.fpr[rs], &vcpu->arch.fpscr);
vcpu->arch.qpr[rs] = tmp[1]; vcpu->arch.qpr[rs] = tmp[1];
dprintk(KERN_INFO "KVM: PSQ_LD [0x%x, 0x%x] at 0x%lx (%d)\n", tmp[0], dprintk(KERN_INFO "KVM: PSQ_LD [0x%x, 0x%x] at 0x%lx (%d)\n", tmp[0],
...@@ -322,14 +310,11 @@ static int kvmppc_emulate_psq_store(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -322,14 +310,11 @@ static int kvmppc_emulate_psq_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
int rs, ulong addr, bool w, int i) int rs, ulong addr, bool w, int i)
{ {
int emulated = EMULATE_FAIL; int emulated = EMULATE_FAIL;
struct thread_struct t;
int r; int r;
u32 tmp[2]; u32 tmp[2];
int len = w ? sizeof(u32) : sizeof(u64); int len = w ? sizeof(u32) : sizeof(u64);
t.fpscr.val = vcpu->arch.fpscr; kvm_cvt_df(&vcpu->arch.fpr[rs], &tmp[0], &vcpu->arch.fpscr);
cvt_df((double*)&vcpu->arch.fpr[rs], (float*)&tmp[0], &t);
tmp[1] = vcpu->arch.qpr[rs]; tmp[1] = vcpu->arch.qpr[rs];
r = kvmppc_st(vcpu, &addr, len, tmp, true); r = kvmppc_st(vcpu, &addr, len, tmp, true);
...@@ -517,7 +502,7 @@ static int get_d_signext(u32 inst) ...@@ -517,7 +502,7 @@ static int get_d_signext(u32 inst)
static int kvmppc_ps_three_in(struct kvm_vcpu *vcpu, bool rc, static int kvmppc_ps_three_in(struct kvm_vcpu *vcpu, bool rc,
int reg_out, int reg_in1, int reg_in2, int reg_out, int reg_in1, int reg_in2,
int reg_in3, int scalar, int reg_in3, int scalar,
void (*func)(struct thread_struct *t, void (*func)(u64 *fpscr,
u32 *dst, u32 *src1, u32 *dst, u32 *src1,
u32 *src2, u32 *src3)) u32 *src2, u32 *src3))
{ {
...@@ -526,27 +511,25 @@ static int kvmppc_ps_three_in(struct kvm_vcpu *vcpu, bool rc, ...@@ -526,27 +511,25 @@ static int kvmppc_ps_three_in(struct kvm_vcpu *vcpu, bool rc,
u32 ps0_out; u32 ps0_out;
u32 ps0_in1, ps0_in2, ps0_in3; u32 ps0_in1, ps0_in2, ps0_in3;
u32 ps1_in1, ps1_in2, ps1_in3; u32 ps1_in1, ps1_in2, ps1_in3;
struct thread_struct t;
t.fpscr.val = vcpu->arch.fpscr;
/* RC */ /* RC */
WARN_ON(rc); WARN_ON(rc);
/* PS0 */ /* PS0 */
cvt_df((double*)&fpr[reg_in1], (float*)&ps0_in1, &t); kvm_cvt_df(&fpr[reg_in1], &ps0_in1, &vcpu->arch.fpscr);
cvt_df((double*)&fpr[reg_in2], (float*)&ps0_in2, &t); kvm_cvt_df(&fpr[reg_in2], &ps0_in2, &vcpu->arch.fpscr);
cvt_df((double*)&fpr[reg_in3], (float*)&ps0_in3, &t); kvm_cvt_df(&fpr[reg_in3], &ps0_in3, &vcpu->arch.fpscr);
if (scalar & SCALAR_LOW) if (scalar & SCALAR_LOW)
ps0_in2 = qpr[reg_in2]; ps0_in2 = qpr[reg_in2];
func(&t, &ps0_out, &ps0_in1, &ps0_in2, &ps0_in3); func(&vcpu->arch.fpscr, &ps0_out, &ps0_in1, &ps0_in2, &ps0_in3);
dprintk(KERN_INFO "PS3 ps0 -> f(0x%x, 0x%x, 0x%x) = 0x%x\n", dprintk(KERN_INFO "PS3 ps0 -> f(0x%x, 0x%x, 0x%x) = 0x%x\n",
ps0_in1, ps0_in2, ps0_in3, ps0_out); ps0_in1, ps0_in2, ps0_in3, ps0_out);
if (!(scalar & SCALAR_NO_PS0)) if (!(scalar & SCALAR_NO_PS0))
cvt_fd((float*)&ps0_out, (double*)&fpr[reg_out], &t); kvm_cvt_fd(&ps0_out, &fpr[reg_out], &vcpu->arch.fpscr);
/* PS1 */ /* PS1 */
ps1_in1 = qpr[reg_in1]; ps1_in1 = qpr[reg_in1];
...@@ -557,7 +540,7 @@ static int kvmppc_ps_three_in(struct kvm_vcpu *vcpu, bool rc, ...@@ -557,7 +540,7 @@ static int kvmppc_ps_three_in(struct kvm_vcpu *vcpu, bool rc,
ps1_in2 = ps0_in2; ps1_in2 = ps0_in2;
if (!(scalar & SCALAR_NO_PS1)) if (!(scalar & SCALAR_NO_PS1))
func(&t, &qpr[reg_out], &ps1_in1, &ps1_in2, &ps1_in3); func(&vcpu->arch.fpscr, &qpr[reg_out], &ps1_in1, &ps1_in2, &ps1_in3);
dprintk(KERN_INFO "PS3 ps1 -> f(0x%x, 0x%x, 0x%x) = 0x%x\n", dprintk(KERN_INFO "PS3 ps1 -> f(0x%x, 0x%x, 0x%x) = 0x%x\n",
ps1_in1, ps1_in2, ps1_in3, qpr[reg_out]); ps1_in1, ps1_in2, ps1_in3, qpr[reg_out]);
...@@ -568,7 +551,7 @@ static int kvmppc_ps_three_in(struct kvm_vcpu *vcpu, bool rc, ...@@ -568,7 +551,7 @@ static int kvmppc_ps_three_in(struct kvm_vcpu *vcpu, bool rc,
static int kvmppc_ps_two_in(struct kvm_vcpu *vcpu, bool rc, static int kvmppc_ps_two_in(struct kvm_vcpu *vcpu, bool rc,
int reg_out, int reg_in1, int reg_in2, int reg_out, int reg_in1, int reg_in2,
int scalar, int scalar,
void (*func)(struct thread_struct *t, void (*func)(u64 *fpscr,
u32 *dst, u32 *src1, u32 *dst, u32 *src1,
u32 *src2)) u32 *src2))
{ {
...@@ -578,27 +561,25 @@ static int kvmppc_ps_two_in(struct kvm_vcpu *vcpu, bool rc, ...@@ -578,27 +561,25 @@ static int kvmppc_ps_two_in(struct kvm_vcpu *vcpu, bool rc,
u32 ps0_in1, ps0_in2; u32 ps0_in1, ps0_in2;
u32 ps1_out; u32 ps1_out;
u32 ps1_in1, ps1_in2; u32 ps1_in1, ps1_in2;
struct thread_struct t;
t.fpscr.val = vcpu->arch.fpscr;
/* RC */ /* RC */
WARN_ON(rc); WARN_ON(rc);
/* PS0 */ /* PS0 */
cvt_df((double*)&fpr[reg_in1], (float*)&ps0_in1, &t); kvm_cvt_df(&fpr[reg_in1], &ps0_in1, &vcpu->arch.fpscr);
if (scalar & SCALAR_LOW) if (scalar & SCALAR_LOW)
ps0_in2 = qpr[reg_in2]; ps0_in2 = qpr[reg_in2];
else else
cvt_df((double*)&fpr[reg_in2], (float*)&ps0_in2, &t); kvm_cvt_df(&fpr[reg_in2], &ps0_in2, &vcpu->arch.fpscr);
func(&t, &ps0_out, &ps0_in1, &ps0_in2); func(&vcpu->arch.fpscr, &ps0_out, &ps0_in1, &ps0_in2);
if (!(scalar & SCALAR_NO_PS0)) { if (!(scalar & SCALAR_NO_PS0)) {
dprintk(KERN_INFO "PS2 ps0 -> f(0x%x, 0x%x) = 0x%x\n", dprintk(KERN_INFO "PS2 ps0 -> f(0x%x, 0x%x) = 0x%x\n",
ps0_in1, ps0_in2, ps0_out); ps0_in1, ps0_in2, ps0_out);
cvt_fd((float*)&ps0_out, (double*)&fpr[reg_out], &t); kvm_cvt_fd(&ps0_out, &fpr[reg_out], &vcpu->arch.fpscr);
} }
/* PS1 */ /* PS1 */
...@@ -608,7 +589,7 @@ static int kvmppc_ps_two_in(struct kvm_vcpu *vcpu, bool rc, ...@@ -608,7 +589,7 @@ static int kvmppc_ps_two_in(struct kvm_vcpu *vcpu, bool rc,
if (scalar & SCALAR_HIGH) if (scalar & SCALAR_HIGH)
ps1_in2 = ps0_in2; ps1_in2 = ps0_in2;
func(&t, &ps1_out, &ps1_in1, &ps1_in2); func(&vcpu->arch.fpscr, &ps1_out, &ps1_in1, &ps1_in2);
if (!(scalar & SCALAR_NO_PS1)) { if (!(scalar & SCALAR_NO_PS1)) {
qpr[reg_out] = ps1_out; qpr[reg_out] = ps1_out;
...@@ -622,31 +603,29 @@ static int kvmppc_ps_two_in(struct kvm_vcpu *vcpu, bool rc, ...@@ -622,31 +603,29 @@ static int kvmppc_ps_two_in(struct kvm_vcpu *vcpu, bool rc,
static int kvmppc_ps_one_in(struct kvm_vcpu *vcpu, bool rc, static int kvmppc_ps_one_in(struct kvm_vcpu *vcpu, bool rc,
int reg_out, int reg_in, int reg_out, int reg_in,
void (*func)(struct thread_struct *t, void (*func)(u64 *t,
u32 *dst, u32 *src1)) u32 *dst, u32 *src1))
{ {
u32 *qpr = vcpu->arch.qpr; u32 *qpr = vcpu->arch.qpr;
u64 *fpr = vcpu->arch.fpr; u64 *fpr = vcpu->arch.fpr;
u32 ps0_out, ps0_in; u32 ps0_out, ps0_in;
u32 ps1_in; u32 ps1_in;
struct thread_struct t;
t.fpscr.val = vcpu->arch.fpscr;
/* RC */ /* RC */
WARN_ON(rc); WARN_ON(rc);
/* PS0 */ /* PS0 */
cvt_df((double*)&fpr[reg_in], (float*)&ps0_in, &t); kvm_cvt_df(&fpr[reg_in], &ps0_in, &vcpu->arch.fpscr);
func(&t, &ps0_out, &ps0_in); func(&vcpu->arch.fpscr, &ps0_out, &ps0_in);
dprintk(KERN_INFO "PS1 ps0 -> f(0x%x) = 0x%x\n", dprintk(KERN_INFO "PS1 ps0 -> f(0x%x) = 0x%x\n",
ps0_in, ps0_out); ps0_in, ps0_out);
cvt_fd((float*)&ps0_out, (double*)&fpr[reg_out], &t); kvm_cvt_fd(&ps0_out, &fpr[reg_out], &vcpu->arch.fpscr);
/* PS1 */ /* PS1 */
ps1_in = qpr[reg_in]; ps1_in = qpr[reg_in];
func(&t, &qpr[reg_out], &ps1_in); func(&vcpu->arch.fpscr, &qpr[reg_out], &ps1_in);
dprintk(KERN_INFO "PS1 ps1 -> f(0x%x) = 0x%x\n", dprintk(KERN_INFO "PS1 ps1 -> f(0x%x) = 0x%x\n",
ps1_in, qpr[reg_out]); ps1_in, qpr[reg_out]);
...@@ -672,13 +651,10 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu) ...@@ -672,13 +651,10 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
bool rcomp = (inst & 1) ? true : false; bool rcomp = (inst & 1) ? true : false;
u32 cr = kvmppc_get_cr(vcpu); u32 cr = kvmppc_get_cr(vcpu);
struct thread_struct t;
#ifdef DEBUG #ifdef DEBUG
int i; int i;
#endif #endif
t.fpscr.val = vcpu->arch.fpscr;
if (!kvmppc_inst_is_paired_single(vcpu, inst)) if (!kvmppc_inst_is_paired_single(vcpu, inst))
return EMULATE_FAIL; return EMULATE_FAIL;
...@@ -695,7 +671,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu) ...@@ -695,7 +671,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
#ifdef DEBUG #ifdef DEBUG
for (i = 0; i < ARRAY_SIZE(vcpu->arch.fpr); i++) { for (i = 0; i < ARRAY_SIZE(vcpu->arch.fpr); i++) {
u32 f; u32 f;
cvt_df((double*)&vcpu->arch.fpr[i], (float*)&f, &t); kvm_cvt_df(&vcpu->arch.fpr[i], &f, &vcpu->arch.fpscr);
dprintk(KERN_INFO "FPR[%d] = 0x%x / 0x%llx QPR[%d] = 0x%x\n", dprintk(KERN_INFO "FPR[%d] = 0x%x / 0x%llx QPR[%d] = 0x%x\n",
i, f, vcpu->arch.fpr[i], i, vcpu->arch.qpr[i]); i, f, vcpu->arch.fpr[i], i, vcpu->arch.qpr[i]);
} }
...@@ -819,8 +795,9 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu) ...@@ -819,8 +795,9 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
WARN_ON(rcomp); WARN_ON(rcomp);
vcpu->arch.fpr[ax_rd] = vcpu->arch.fpr[ax_ra]; vcpu->arch.fpr[ax_rd] = vcpu->arch.fpr[ax_ra];
/* vcpu->arch.qpr[ax_rd] = vcpu->arch.fpr[ax_rb]; */ /* vcpu->arch.qpr[ax_rd] = vcpu->arch.fpr[ax_rb]; */
cvt_df((double*)&vcpu->arch.fpr[ax_rb], kvm_cvt_df(&vcpu->arch.fpr[ax_rb],
(float*)&vcpu->arch.qpr[ax_rd], &t); &vcpu->arch.qpr[ax_rd],
&vcpu->arch.fpscr);
break; break;
case OP_4X_PS_MERGE01: case OP_4X_PS_MERGE01:
WARN_ON(rcomp); WARN_ON(rcomp);
...@@ -830,17 +807,20 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu) ...@@ -830,17 +807,20 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
case OP_4X_PS_MERGE10: case OP_4X_PS_MERGE10:
WARN_ON(rcomp); WARN_ON(rcomp);
/* vcpu->arch.fpr[ax_rd] = vcpu->arch.qpr[ax_ra]; */ /* vcpu->arch.fpr[ax_rd] = vcpu->arch.qpr[ax_ra]; */
cvt_fd((float*)&vcpu->arch.qpr[ax_ra], kvm_cvt_fd(&vcpu->arch.qpr[ax_ra],
(double*)&vcpu->arch.fpr[ax_rd], &t); &vcpu->arch.fpr[ax_rd],
&vcpu->arch.fpscr);
/* vcpu->arch.qpr[ax_rd] = vcpu->arch.fpr[ax_rb]; */ /* vcpu->arch.qpr[ax_rd] = vcpu->arch.fpr[ax_rb]; */
cvt_df((double*)&vcpu->arch.fpr[ax_rb], kvm_cvt_df(&vcpu->arch.fpr[ax_rb],
(float*)&vcpu->arch.qpr[ax_rd], &t); &vcpu->arch.qpr[ax_rd],
&vcpu->arch.fpscr);
break; break;
case OP_4X_PS_MERGE11: case OP_4X_PS_MERGE11:
WARN_ON(rcomp); WARN_ON(rcomp);
/* vcpu->arch.fpr[ax_rd] = vcpu->arch.qpr[ax_ra]; */ /* vcpu->arch.fpr[ax_rd] = vcpu->arch.qpr[ax_ra]; */
cvt_fd((float*)&vcpu->arch.qpr[ax_ra], kvm_cvt_fd(&vcpu->arch.qpr[ax_ra],
(double*)&vcpu->arch.fpr[ax_rd], &t); &vcpu->arch.fpr[ax_rd],
&vcpu->arch.fpscr);
vcpu->arch.qpr[ax_rd] = vcpu->arch.qpr[ax_rb]; vcpu->arch.qpr[ax_rd] = vcpu->arch.qpr[ax_rb];
break; break;
} }
...@@ -1275,7 +1255,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu) ...@@ -1275,7 +1255,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
#ifdef DEBUG #ifdef DEBUG
for (i = 0; i < ARRAY_SIZE(vcpu->arch.fpr); i++) { for (i = 0; i < ARRAY_SIZE(vcpu->arch.fpr); i++) {
u32 f; u32 f;
cvt_df((double*)&vcpu->arch.fpr[i], (float*)&f, &t); kvm_cvt_df(&vcpu->arch.fpr[i], &f, &vcpu->arch.fpscr);
dprintk(KERN_INFO "FPR[%d] = 0x%x\n", i, f); dprintk(KERN_INFO "FPR[%d] = 0x%x\n", i, f);
} }
#endif #endif
......
...@@ -144,7 +144,7 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu, ...@@ -144,7 +144,7 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
unsigned int priority) unsigned int priority)
{ {
int allowed = 0; int allowed = 0;
ulong msr_mask; ulong uninitialized_var(msr_mask);
bool update_esr = false, update_dear = false; bool update_esr = false, update_dear = false;
switch (priority) { switch (priority) {
...@@ -485,8 +485,6 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -485,8 +485,6 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{ {
int i; int i;
vcpu_load(vcpu);
regs->pc = vcpu->arch.pc; regs->pc = vcpu->arch.pc;
regs->cr = kvmppc_get_cr(vcpu); regs->cr = kvmppc_get_cr(vcpu);
regs->ctr = vcpu->arch.ctr; regs->ctr = vcpu->arch.ctr;
...@@ -507,8 +505,6 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -507,8 +505,6 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
for (i = 0; i < ARRAY_SIZE(regs->gpr); i++) for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
regs->gpr[i] = kvmppc_get_gpr(vcpu, i); regs->gpr[i] = kvmppc_get_gpr(vcpu, i);
vcpu_put(vcpu);
return 0; return 0;
} }
...@@ -516,8 +512,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -516,8 +512,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{ {
int i; int i;
vcpu_load(vcpu);
vcpu->arch.pc = regs->pc; vcpu->arch.pc = regs->pc;
kvmppc_set_cr(vcpu, regs->cr); kvmppc_set_cr(vcpu, regs->cr);
vcpu->arch.ctr = regs->ctr; vcpu->arch.ctr = regs->ctr;
...@@ -537,8 +531,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -537,8 +531,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
for (i = 0; i < ARRAY_SIZE(regs->gpr); i++) for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
kvmppc_set_gpr(vcpu, i, regs->gpr[i]); kvmppc_set_gpr(vcpu, i, regs->gpr[i]);
vcpu_put(vcpu);
return 0; return 0;
} }
...@@ -569,9 +561,7 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu, ...@@ -569,9 +561,7 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
{ {
int r; int r;
vcpu_load(vcpu);
r = kvmppc_core_vcpu_translate(vcpu, tr); r = kvmppc_core_vcpu_translate(vcpu, tr);
vcpu_put(vcpu);
return r; return r;
} }
......
...@@ -271,3 +271,21 @@ FPD_THREE_IN(fmsub) ...@@ -271,3 +271,21 @@ FPD_THREE_IN(fmsub)
FPD_THREE_IN(fmadd) FPD_THREE_IN(fmadd)
FPD_THREE_IN(fnmsub) FPD_THREE_IN(fnmsub)
FPD_THREE_IN(fnmadd) FPD_THREE_IN(fnmadd)
_GLOBAL(kvm_cvt_fd)
lfd 0,0(r5) /* load up fpscr value */
MTFSF_L(0)
lfs 0,0(r3)
stfd 0,0(r4)
mffs 0
stfd 0,0(r5) /* save new fpscr value */
blr
_GLOBAL(kvm_cvt_df)
lfd 0,0(r5) /* load up fpscr value */
MTFSF_L(0)
lfd 0,0(r3)
stfs 0,0(r4)
mffs 0
stfd 0,0(r5) /* save new fpscr value */
blr
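For illustration only (not part of the patch): matching the register usage above (r3 = source, r4 = destination, r5 = fpscr image) and the call sites in the paired-single emulation code, the C-side declarations of these helpers would look roughly as follows; the header they actually live in is not shown in this hunk, so treat the prototypes as a sketch:

void kvm_cvt_fd(u32 *from, u64 *to, u64 *fpscr);  /* single -> double, updates *fpscr */
void kvm_cvt_df(u64 *from, u32 *to, u64 *fpscr);  /* double -> single, updates *fpscr */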
...@@ -36,11 +36,6 @@ ...@@ -36,11 +36,6 @@
#define CREATE_TRACE_POINTS #define CREATE_TRACE_POINTS
#include "trace.h" #include "trace.h"
gfn_t unalias_gfn(struct kvm *kvm, gfn_t gfn)
{
return gfn;
}
int kvm_arch_vcpu_runnable(struct kvm_vcpu *v) int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
{ {
return !(v->arch.msr & MSR_WE) || !!(v->arch.pending_exceptions); return !(v->arch.msr & MSR_WE) || !!(v->arch.pending_exceptions);
...@@ -287,7 +282,7 @@ static void kvmppc_complete_dcr_load(struct kvm_vcpu *vcpu, ...@@ -287,7 +282,7 @@ static void kvmppc_complete_dcr_load(struct kvm_vcpu *vcpu,
static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu, static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
struct kvm_run *run) struct kvm_run *run)
{ {
u64 gpr; u64 uninitialized_var(gpr);
if (run->mmio.len > sizeof(gpr)) { if (run->mmio.len > sizeof(gpr)) {
printk(KERN_ERR "bad MMIO length: %d\n", run->mmio.len); printk(KERN_ERR "bad MMIO length: %d\n", run->mmio.len);
...@@ -423,8 +418,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) ...@@ -423,8 +418,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
int r; int r;
sigset_t sigsaved; sigset_t sigsaved;
vcpu_load(vcpu);
if (vcpu->sigset_active) if (vcpu->sigset_active)
sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved); sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved);
...@@ -456,8 +449,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) ...@@ -456,8 +449,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
if (vcpu->sigset_active) if (vcpu->sigset_active)
sigprocmask(SIG_SETMASK, &sigsaved, NULL); sigprocmask(SIG_SETMASK, &sigsaved, NULL);
vcpu_put(vcpu);
return r; return r;
} }
...@@ -523,8 +514,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp, ...@@ -523,8 +514,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
if (copy_from_user(&irq, argp, sizeof(irq))) if (copy_from_user(&irq, argp, sizeof(irq)))
goto out; goto out;
r = kvm_vcpu_ioctl_interrupt(vcpu, &irq); r = kvm_vcpu_ioctl_interrupt(vcpu, &irq);
break; goto out;
} }
case KVM_ENABLE_CAP: case KVM_ENABLE_CAP:
{ {
struct kvm_enable_cap cap; struct kvm_enable_cap cap;
......
...@@ -26,7 +26,7 @@ ...@@ -26,7 +26,7 @@
struct sca_entry { struct sca_entry {
atomic_t scn; atomic_t scn;
__u64 reserved; __u32 reserved;
__u64 sda; __u64 sda;
__u64 reserved2[2]; __u64 reserved2[2];
} __attribute__((packed)); } __attribute__((packed));
...@@ -41,7 +41,8 @@ struct sca_block { ...@@ -41,7 +41,8 @@ struct sca_block {
} __attribute__((packed)); } __attribute__((packed));
#define KVM_NR_PAGE_SIZES 2 #define KVM_NR_PAGE_SIZES 2
#define KVM_HPAGE_SHIFT(x) (PAGE_SHIFT + ((x) - 1) * 8) #define KVM_HPAGE_GFN_SHIFT(x) (((x) - 1) * 8)
#define KVM_HPAGE_SHIFT(x) (PAGE_SHIFT + KVM_HPAGE_GFN_SHIFT(x))
#define KVM_HPAGE_SIZE(x) (1UL << KVM_HPAGE_SHIFT(x)) #define KVM_HPAGE_SIZE(x) (1UL << KVM_HPAGE_SHIFT(x))
#define KVM_HPAGE_MASK(x) (~(KVM_HPAGE_SIZE(x) - 1)) #define KVM_HPAGE_MASK(x) (~(KVM_HPAGE_SIZE(x) - 1))
#define KVM_PAGES_PER_HPAGE(x) (KVM_HPAGE_SIZE(x) / PAGE_SIZE) #define KVM_PAGES_PER_HPAGE(x) (KVM_HPAGE_SIZE(x) / PAGE_SIZE)
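Worked example (illustrative, assuming the s390 PAGE_SHIFT of 12): with KVM_NR_PAGE_SIZES 2 the only huge-page level is x = 2, giving

KVM_HPAGE_GFN_SHIFT(2) = 8
KVM_HPAGE_SHIFT(2)     = 12 + 8 = 20              /* 1 MiB huge pages */
KVM_PAGES_PER_HPAGE(2) = (1UL << 20) / 4096 = 256 /* base pages per huge page */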
......
...@@ -135,7 +135,7 @@ static int handle_stop(struct kvm_vcpu *vcpu) ...@@ -135,7 +135,7 @@ static int handle_stop(struct kvm_vcpu *vcpu)
spin_lock_bh(&vcpu->arch.local_int.lock); spin_lock_bh(&vcpu->arch.local_int.lock);
if (vcpu->arch.local_int.action_bits & ACTION_STORE_ON_STOP) { if (vcpu->arch.local_int.action_bits & ACTION_STORE_ON_STOP) {
vcpu->arch.local_int.action_bits &= ~ACTION_STORE_ON_STOP; vcpu->arch.local_int.action_bits &= ~ACTION_STORE_ON_STOP;
rc = __kvm_s390_vcpu_store_status(vcpu, rc = kvm_s390_vcpu_store_status(vcpu,
KVM_S390_STORE_STATUS_NOADDR); KVM_S390_STORE_STATUS_NOADDR);
if (rc >= 0) if (rc >= 0)
rc = -EOPNOTSUPP; rc = -EOPNOTSUPP;
......
...@@ -207,6 +207,7 @@ struct kvm *kvm_arch_create_vm(void) ...@@ -207,6 +207,7 @@ struct kvm *kvm_arch_create_vm(void)
void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
{ {
VCPU_EVENT(vcpu, 3, "%s", "free cpu"); VCPU_EVENT(vcpu, 3, "%s", "free cpu");
clear_bit(63 - vcpu->vcpu_id, (unsigned long *) &vcpu->kvm->arch.sca->mcn);
if (vcpu->kvm->arch.sca->cpu[vcpu->vcpu_id].sda == if (vcpu->kvm->arch.sca->cpu[vcpu->vcpu_id].sda ==
(__u64) vcpu->arch.sie_block) (__u64) vcpu->arch.sie_block)
vcpu->kvm->arch.sca->cpu[vcpu->vcpu_id].sda = 0; vcpu->kvm->arch.sca->cpu[vcpu->vcpu_id].sda = 0;
...@@ -296,7 +297,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu) ...@@ -296,7 +297,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
{ {
atomic_set(&vcpu->arch.sie_block->cpuflags, CPUSTAT_ZARCH); atomic_set(&vcpu->arch.sie_block->cpuflags, CPUSTAT_ZARCH);
set_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests); set_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests);
vcpu->arch.sie_block->ecb = 2; vcpu->arch.sie_block->ecb = 6;
vcpu->arch.sie_block->eca = 0xC1002001U; vcpu->arch.sie_block->eca = 0xC1002001U;
vcpu->arch.sie_block->fac = (int) (long) facilities; vcpu->arch.sie_block->fac = (int) (long) facilities;
hrtimer_init(&vcpu->arch.ckc_timer, CLOCK_REALTIME, HRTIMER_MODE_ABS); hrtimer_init(&vcpu->arch.ckc_timer, CLOCK_REALTIME, HRTIMER_MODE_ABS);
...@@ -329,6 +330,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, ...@@ -329,6 +330,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
kvm->arch.sca->cpu[id].sda = (__u64) vcpu->arch.sie_block; kvm->arch.sca->cpu[id].sda = (__u64) vcpu->arch.sie_block;
vcpu->arch.sie_block->scaoh = (__u32)(((__u64)kvm->arch.sca) >> 32); vcpu->arch.sie_block->scaoh = (__u32)(((__u64)kvm->arch.sca) >> 32);
vcpu->arch.sie_block->scaol = (__u32)(__u64)kvm->arch.sca; vcpu->arch.sie_block->scaol = (__u32)(__u64)kvm->arch.sca;
set_bit(63 - id, (unsigned long *) &kvm->arch.sca->mcn);
spin_lock_init(&vcpu->arch.local_int.lock); spin_lock_init(&vcpu->arch.local_int.lock);
INIT_LIST_HEAD(&vcpu->arch.local_int.list); INIT_LIST_HEAD(&vcpu->arch.local_int.list);
...@@ -363,63 +365,49 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) ...@@ -363,63 +365,49 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
static int kvm_arch_vcpu_ioctl_initial_reset(struct kvm_vcpu *vcpu) static int kvm_arch_vcpu_ioctl_initial_reset(struct kvm_vcpu *vcpu)
{ {
vcpu_load(vcpu);
kvm_s390_vcpu_initial_reset(vcpu); kvm_s390_vcpu_initial_reset(vcpu);
vcpu_put(vcpu);
return 0; return 0;
} }
int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{ {
vcpu_load(vcpu);
memcpy(&vcpu->arch.guest_gprs, &regs->gprs, sizeof(regs->gprs)); memcpy(&vcpu->arch.guest_gprs, &regs->gprs, sizeof(regs->gprs));
vcpu_put(vcpu);
return 0; return 0;
} }
int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{ {
vcpu_load(vcpu);
memcpy(&regs->gprs, &vcpu->arch.guest_gprs, sizeof(regs->gprs)); memcpy(&regs->gprs, &vcpu->arch.guest_gprs, sizeof(regs->gprs));
vcpu_put(vcpu);
return 0; return 0;
} }
int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
struct kvm_sregs *sregs) struct kvm_sregs *sregs)
{ {
vcpu_load(vcpu);
memcpy(&vcpu->arch.guest_acrs, &sregs->acrs, sizeof(sregs->acrs)); memcpy(&vcpu->arch.guest_acrs, &sregs->acrs, sizeof(sregs->acrs));
memcpy(&vcpu->arch.sie_block->gcr, &sregs->crs, sizeof(sregs->crs)); memcpy(&vcpu->arch.sie_block->gcr, &sregs->crs, sizeof(sregs->crs));
vcpu_put(vcpu);
return 0; return 0;
} }
int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu, int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
struct kvm_sregs *sregs) struct kvm_sregs *sregs)
{ {
vcpu_load(vcpu);
memcpy(&sregs->acrs, &vcpu->arch.guest_acrs, sizeof(sregs->acrs)); memcpy(&sregs->acrs, &vcpu->arch.guest_acrs, sizeof(sregs->acrs));
memcpy(&sregs->crs, &vcpu->arch.sie_block->gcr, sizeof(sregs->crs)); memcpy(&sregs->crs, &vcpu->arch.sie_block->gcr, sizeof(sregs->crs));
vcpu_put(vcpu);
return 0; return 0;
} }
int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
{ {
vcpu_load(vcpu);
memcpy(&vcpu->arch.guest_fpregs.fprs, &fpu->fprs, sizeof(fpu->fprs)); memcpy(&vcpu->arch.guest_fpregs.fprs, &fpu->fprs, sizeof(fpu->fprs));
vcpu->arch.guest_fpregs.fpc = fpu->fpc; vcpu->arch.guest_fpregs.fpc = fpu->fpc;
vcpu_put(vcpu);
return 0; return 0;
} }
int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
{ {
vcpu_load(vcpu);
memcpy(&fpu->fprs, &vcpu->arch.guest_fpregs.fprs, sizeof(fpu->fprs)); memcpy(&fpu->fprs, &vcpu->arch.guest_fpregs.fprs, sizeof(fpu->fprs));
fpu->fpc = vcpu->arch.guest_fpregs.fpc; fpu->fpc = vcpu->arch.guest_fpregs.fpc;
vcpu_put(vcpu);
return 0; return 0;
} }
...@@ -427,14 +415,12 @@ static int kvm_arch_vcpu_ioctl_set_initial_psw(struct kvm_vcpu *vcpu, psw_t psw) ...@@ -427,14 +415,12 @@ static int kvm_arch_vcpu_ioctl_set_initial_psw(struct kvm_vcpu *vcpu, psw_t psw)
{ {
int rc = 0; int rc = 0;
vcpu_load(vcpu);
if (atomic_read(&vcpu->arch.sie_block->cpuflags) & CPUSTAT_RUNNING) if (atomic_read(&vcpu->arch.sie_block->cpuflags) & CPUSTAT_RUNNING)
rc = -EBUSY; rc = -EBUSY;
else { else {
vcpu->run->psw_mask = psw.mask; vcpu->run->psw_mask = psw.mask;
vcpu->run->psw_addr = psw.addr; vcpu->run->psw_addr = psw.addr;
} }
vcpu_put(vcpu);
return rc; return rc;
} }
...@@ -498,8 +484,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) ...@@ -498,8 +484,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
int rc; int rc;
sigset_t sigsaved; sigset_t sigsaved;
vcpu_load(vcpu);
rerun_vcpu: rerun_vcpu:
if (vcpu->requests) if (vcpu->requests)
if (test_and_clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests)) if (test_and_clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
...@@ -568,8 +552,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) ...@@ -568,8 +552,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
if (vcpu->sigset_active) if (vcpu->sigset_active)
sigprocmask(SIG_SETMASK, &sigsaved, NULL); sigprocmask(SIG_SETMASK, &sigsaved, NULL);
vcpu_put(vcpu);
vcpu->stat.exit_userspace++; vcpu->stat.exit_userspace++;
return rc; return rc;
} }
...@@ -589,7 +571,7 @@ static int __guestcopy(struct kvm_vcpu *vcpu, u64 guestdest, const void *from, ...@@ -589,7 +571,7 @@ static int __guestcopy(struct kvm_vcpu *vcpu, u64 guestdest, const void *from,
* KVM_S390_STORE_STATUS_NOADDR: -> 0x1200 on 64 bit * KVM_S390_STORE_STATUS_NOADDR: -> 0x1200 on 64 bit
* KVM_S390_STORE_STATUS_PREFIXED: -> prefix * KVM_S390_STORE_STATUS_PREFIXED: -> prefix
*/ */
int __kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr) int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
{ {
const unsigned char archmode = 1; const unsigned char archmode = 1;
int prefix; int prefix;
...@@ -651,45 +633,42 @@ int __kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr) ...@@ -651,45 +633,42 @@ int __kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
return 0; return 0;
} }
static int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
{
int rc;
vcpu_load(vcpu);
rc = __kvm_s390_vcpu_store_status(vcpu, addr);
vcpu_put(vcpu);
return rc;
}
long kvm_arch_vcpu_ioctl(struct file *filp, long kvm_arch_vcpu_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg) unsigned int ioctl, unsigned long arg)
{ {
struct kvm_vcpu *vcpu = filp->private_data; struct kvm_vcpu *vcpu = filp->private_data;
void __user *argp = (void __user *)arg; void __user *argp = (void __user *)arg;
long r;
switch (ioctl) { switch (ioctl) {
case KVM_S390_INTERRUPT: { case KVM_S390_INTERRUPT: {
struct kvm_s390_interrupt s390int; struct kvm_s390_interrupt s390int;
r = -EFAULT;
if (copy_from_user(&s390int, argp, sizeof(s390int))) if (copy_from_user(&s390int, argp, sizeof(s390int)))
return -EFAULT; break;
return kvm_s390_inject_vcpu(vcpu, &s390int); r = kvm_s390_inject_vcpu(vcpu, &s390int);
break;
} }
case KVM_S390_STORE_STATUS: case KVM_S390_STORE_STATUS:
return kvm_s390_vcpu_store_status(vcpu, arg); r = kvm_s390_vcpu_store_status(vcpu, arg);
break;
case KVM_S390_SET_INITIAL_PSW: { case KVM_S390_SET_INITIAL_PSW: {
psw_t psw; psw_t psw;
r = -EFAULT;
if (copy_from_user(&psw, argp, sizeof(psw))) if (copy_from_user(&psw, argp, sizeof(psw)))
return -EFAULT; break;
return kvm_arch_vcpu_ioctl_set_initial_psw(vcpu, psw); r = kvm_arch_vcpu_ioctl_set_initial_psw(vcpu, psw);
break;
} }
case KVM_S390_INITIAL_RESET: case KVM_S390_INITIAL_RESET:
return kvm_arch_vcpu_ioctl_initial_reset(vcpu); r = kvm_arch_vcpu_ioctl_initial_reset(vcpu);
break;
default: default:
; r = -EINVAL;
} }
return -EINVAL; return r;
} }
/* Section: memory related */ /* Section: memory related */
...@@ -744,11 +723,6 @@ void kvm_arch_flush_shadow(struct kvm *kvm) ...@@ -744,11 +723,6 @@ void kvm_arch_flush_shadow(struct kvm *kvm)
{ {
} }
gfn_t unalias_gfn(struct kvm *kvm, gfn_t gfn)
{
return gfn;
}
static int __init kvm_s390_init(void) static int __init kvm_s390_init(void)
{ {
int ret; int ret;
......
...@@ -92,7 +92,7 @@ int kvm_s390_handle_b2(struct kvm_vcpu *vcpu); ...@@ -92,7 +92,7 @@ int kvm_s390_handle_b2(struct kvm_vcpu *vcpu);
int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu); int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu);
/* implemented in kvm-s390.c */ /* implemented in kvm-s390.c */
int __kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu,
unsigned long addr); unsigned long addr);
/* implemented in diag.c */ /* implemented in diag.c */
int kvm_s390_handle_diag(struct kvm_vcpu *vcpu); int kvm_s390_handle_diag(struct kvm_vcpu *vcpu);
......
...@@ -482,6 +482,8 @@ static inline void fpu_copy(struct fpu *dst, struct fpu *src) ...@@ -482,6 +482,8 @@ static inline void fpu_copy(struct fpu *dst, struct fpu *src)
memcpy(dst->state, src->state, xstate_size); memcpy(dst->state, src->state, xstate_size);
} }
extern void fpu_finit(struct fpu *fpu);
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#define PSHUFB_XMM5_XMM0 .byte 0x66, 0x0f, 0x38, 0x00, 0xc5 #define PSHUFB_XMM5_XMM0 .byte 0x66, 0x0f, 0x38, 0x00, 0xc5
......
...@@ -22,6 +22,8 @@ ...@@ -22,6 +22,8 @@
#define __KVM_HAVE_XEN_HVM #define __KVM_HAVE_XEN_HVM
#define __KVM_HAVE_VCPU_EVENTS #define __KVM_HAVE_VCPU_EVENTS
#define __KVM_HAVE_DEBUGREGS #define __KVM_HAVE_DEBUGREGS
#define __KVM_HAVE_XSAVE
#define __KVM_HAVE_XCRS
/* Architectural interrupt line count. */ /* Architectural interrupt line count. */
#define KVM_NR_INTERRUPTS 256 #define KVM_NR_INTERRUPTS 256
...@@ -299,4 +301,24 @@ struct kvm_debugregs { ...@@ -299,4 +301,24 @@ struct kvm_debugregs {
__u64 reserved[9]; __u64 reserved[9];
}; };
/* for KVM_CAP_XSAVE */
struct kvm_xsave {
__u32 region[1024];
};
#define KVM_MAX_XCRS 16
struct kvm_xcr {
__u32 xcr;
__u32 reserved;
__u64 value;
};
struct kvm_xcrs {
__u32 nr_xcrs;
__u32 flags;
struct kvm_xcr xcrs[KVM_MAX_XCRS];
__u64 padding[16];
};
#endif /* _ASM_X86_KVM_H */ #endif /* _ASM_X86_KVM_H */
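For illustration only (not part of this hunk): userspace reaches the new kvm_xsave and kvm_xcrs structures through the vcpu ioctls introduced in the same series. The sketch below assumes vcpu_fd was obtained from KVM_CREATE_VCPU and that the kernel advertises KVM_CAP_XSAVE / KVM_CAP_XCRS.

struct kvm_xsave xsave;  /* 1024 * 4 = 4096 bytes: FXSAVE area + XSAVE header + extended state */
struct kvm_xcrs xcrs;

if (ioctl(vcpu_fd, KVM_GET_XSAVE, &xsave) < 0)
        perror("KVM_GET_XSAVE");
if (ioctl(vcpu_fd, KVM_GET_XCRS, &xcrs) < 0)
        perror("KVM_GET_XCRS");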
...@@ -51,8 +51,10 @@ struct x86_emulate_ctxt; ...@@ -51,8 +51,10 @@ struct x86_emulate_ctxt;
#define X86EMUL_UNHANDLEABLE 1 #define X86EMUL_UNHANDLEABLE 1
/* Terminate emulation but return success to the caller. */ /* Terminate emulation but return success to the caller. */
#define X86EMUL_PROPAGATE_FAULT 2 /* propagate a generated fault to guest */ #define X86EMUL_PROPAGATE_FAULT 2 /* propagate a generated fault to guest */
#define X86EMUL_RETRY_INSTR 2 /* retry the instruction for some reason */ #define X86EMUL_RETRY_INSTR 3 /* retry the instruction for some reason */
#define X86EMUL_CMPXCHG_FAILED 2 /* cmpxchg did not see expected value */ #define X86EMUL_CMPXCHG_FAILED 4 /* cmpxchg did not see expected value */
#define X86EMUL_IO_NEEDED 5 /* IO is needed to complete emulation */
struct x86_emulate_ops { struct x86_emulate_ops {
/* /*
* read_std: Read bytes of standard (non-emulated/special) memory. * read_std: Read bytes of standard (non-emulated/special) memory.
...@@ -92,6 +94,7 @@ struct x86_emulate_ops { ...@@ -92,6 +94,7 @@ struct x86_emulate_ops {
int (*read_emulated)(unsigned long addr, int (*read_emulated)(unsigned long addr,
void *val, void *val,
unsigned int bytes, unsigned int bytes,
unsigned int *error,
struct kvm_vcpu *vcpu); struct kvm_vcpu *vcpu);
/* /*
...@@ -104,6 +107,7 @@ struct x86_emulate_ops { ...@@ -104,6 +107,7 @@ struct x86_emulate_ops {
int (*write_emulated)(unsigned long addr, int (*write_emulated)(unsigned long addr,
const void *val, const void *val,
unsigned int bytes, unsigned int bytes,
unsigned int *error,
struct kvm_vcpu *vcpu); struct kvm_vcpu *vcpu);
/* /*
...@@ -118,6 +122,7 @@ struct x86_emulate_ops { ...@@ -118,6 +122,7 @@ struct x86_emulate_ops {
const void *old, const void *old,
const void *new, const void *new,
unsigned int bytes, unsigned int bytes,
unsigned int *error,
struct kvm_vcpu *vcpu); struct kvm_vcpu *vcpu);
int (*pio_in_emulated)(int size, unsigned short port, void *val, int (*pio_in_emulated)(int size, unsigned short port, void *val,
...@@ -132,18 +137,26 @@ struct x86_emulate_ops { ...@@ -132,18 +137,26 @@ struct x86_emulate_ops {
int seg, struct kvm_vcpu *vcpu); int seg, struct kvm_vcpu *vcpu);
u16 (*get_segment_selector)(int seg, struct kvm_vcpu *vcpu); u16 (*get_segment_selector)(int seg, struct kvm_vcpu *vcpu);
void (*set_segment_selector)(u16 sel, int seg, struct kvm_vcpu *vcpu); void (*set_segment_selector)(u16 sel, int seg, struct kvm_vcpu *vcpu);
unsigned long (*get_cached_segment_base)(int seg, struct kvm_vcpu *vcpu);
void (*get_gdt)(struct desc_ptr *dt, struct kvm_vcpu *vcpu); void (*get_gdt)(struct desc_ptr *dt, struct kvm_vcpu *vcpu);
ulong (*get_cr)(int cr, struct kvm_vcpu *vcpu); ulong (*get_cr)(int cr, struct kvm_vcpu *vcpu);
void (*set_cr)(int cr, ulong val, struct kvm_vcpu *vcpu); int (*set_cr)(int cr, ulong val, struct kvm_vcpu *vcpu);
int (*cpl)(struct kvm_vcpu *vcpu); int (*cpl)(struct kvm_vcpu *vcpu);
void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags); int (*get_dr)(int dr, unsigned long *dest, struct kvm_vcpu *vcpu);
int (*set_dr)(int dr, unsigned long value, struct kvm_vcpu *vcpu);
int (*set_msr)(struct kvm_vcpu *vcpu, u32 msr_index, u64 data);
int (*get_msr)(struct kvm_vcpu *vcpu, u32 msr_index, u64 *pdata);
}; };
/* Type, address-of, and value of an instruction's operand. */ /* Type, address-of, and value of an instruction's operand. */
struct operand { struct operand {
enum { OP_REG, OP_MEM, OP_IMM, OP_NONE } type; enum { OP_REG, OP_MEM, OP_IMM, OP_NONE } type;
unsigned int bytes; unsigned int bytes;
unsigned long val, orig_val, *ptr; unsigned long orig_val, *ptr;
union {
unsigned long val;
char valptr[sizeof(unsigned long) + 2];
};
}; };
struct fetch_cache { struct fetch_cache {
...@@ -186,6 +199,7 @@ struct decode_cache { ...@@ -186,6 +199,7 @@ struct decode_cache {
unsigned long modrm_val; unsigned long modrm_val;
struct fetch_cache fetch; struct fetch_cache fetch;
struct read_cache io_read; struct read_cache io_read;
struct read_cache mem_read;
}; };
struct x86_emulate_ctxt { struct x86_emulate_ctxt {
...@@ -202,6 +216,12 @@ struct x86_emulate_ctxt { ...@@ -202,6 +216,12 @@ struct x86_emulate_ctxt {
int interruptibility; int interruptibility;
bool restart; /* restart string instruction after writeback */ bool restart; /* restart string instruction after writeback */
int exception; /* exception that happens during emulation or -1 */
u32 error_code; /* error code for exception */
bool error_code_valid;
unsigned long cr2; /* faulted address in case of #PF */
/* decode cache */ /* decode cache */
struct decode_cache decode; struct decode_cache decode;
}; };
......
...@@ -15,6 +15,7 @@ ...@@ -15,6 +15,7 @@
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/mmu_notifier.h> #include <linux/mmu_notifier.h>
#include <linux/tracepoint.h> #include <linux/tracepoint.h>
#include <linux/cpumask.h>
#include <linux/kvm.h> #include <linux/kvm.h>
#include <linux/kvm_para.h> #include <linux/kvm_para.h>
...@@ -39,11 +40,14 @@ ...@@ -39,11 +40,14 @@
0xFFFFFF0000000000ULL) 0xFFFFFF0000000000ULL)
#define INVALID_PAGE (~(hpa_t)0) #define INVALID_PAGE (~(hpa_t)0)
#define VALID_PAGE(x) ((x) != INVALID_PAGE)
#define UNMAPPED_GVA (~(gpa_t)0) #define UNMAPPED_GVA (~(gpa_t)0)
/* KVM Hugepage definitions for x86 */ /* KVM Hugepage definitions for x86 */
#define KVM_NR_PAGE_SIZES 3 #define KVM_NR_PAGE_SIZES 3
#define KVM_HPAGE_SHIFT(x) (PAGE_SHIFT + (((x) - 1) * 9)) #define KVM_HPAGE_GFN_SHIFT(x) (((x) - 1) * 9)
#define KVM_HPAGE_SHIFT(x) (PAGE_SHIFT + KVM_HPAGE_GFN_SHIFT(x))
#define KVM_HPAGE_SIZE(x) (1UL << KVM_HPAGE_SHIFT(x)) #define KVM_HPAGE_SIZE(x) (1UL << KVM_HPAGE_SHIFT(x))
#define KVM_HPAGE_MASK(x) (~(KVM_HPAGE_SIZE(x) - 1)) #define KVM_HPAGE_MASK(x) (~(KVM_HPAGE_SIZE(x) - 1))
#define KVM_PAGES_PER_HPAGE(x) (KVM_HPAGE_SIZE(x) / PAGE_SIZE) #define KVM_PAGES_PER_HPAGE(x) (KVM_HPAGE_SIZE(x) / PAGE_SIZE)
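Worked example (illustrative, PAGE_SHIFT = 12): with KVM_NR_PAGE_SIZES 3 the new helper yields

KVM_HPAGE_GFN_SHIFT(2) = 9,  KVM_HPAGE_SHIFT(2) = 21  /* 2 MiB pages, 512 base pages    */
KVM_HPAGE_GFN_SHIFT(3) = 18, KVM_HPAGE_SHIFT(3) = 30  /* 1 GiB pages, 262144 base pages */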
...@@ -69,8 +73,6 @@ ...@@ -69,8 +73,6 @@
#define IOPL_SHIFT 12 #define IOPL_SHIFT 12
#define KVM_ALIAS_SLOTS 4
#define KVM_PERMILLE_MMU_PAGES 20 #define KVM_PERMILLE_MMU_PAGES 20
#define KVM_MIN_ALLOC_MMU_PAGES 64 #define KVM_MIN_ALLOC_MMU_PAGES 64
#define KVM_MMU_HASH_SHIFT 10 #define KVM_MMU_HASH_SHIFT 10
...@@ -241,7 +243,7 @@ struct kvm_mmu { ...@@ -241,7 +243,7 @@ struct kvm_mmu {
void (*prefetch_page)(struct kvm_vcpu *vcpu, void (*prefetch_page)(struct kvm_vcpu *vcpu,
struct kvm_mmu_page *page); struct kvm_mmu_page *page);
int (*sync_page)(struct kvm_vcpu *vcpu, int (*sync_page)(struct kvm_vcpu *vcpu,
struct kvm_mmu_page *sp); struct kvm_mmu_page *sp, bool clear_unsync);
void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva); void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva);
hpa_t root_hpa; hpa_t root_hpa;
int root_level; int root_level;
...@@ -301,8 +303,8 @@ struct kvm_vcpu_arch { ...@@ -301,8 +303,8 @@ struct kvm_vcpu_arch {
unsigned long mmu_seq; unsigned long mmu_seq;
} update_pte; } update_pte;
struct i387_fxsave_struct host_fx_image; struct fpu guest_fpu;
struct i387_fxsave_struct guest_fx_image; u64 xcr0;
gva_t mmio_fault_cr2; gva_t mmio_fault_cr2;
struct kvm_pio_request pio; struct kvm_pio_request pio;
...@@ -360,26 +362,11 @@ struct kvm_vcpu_arch { ...@@ -360,26 +362,11 @@ struct kvm_vcpu_arch {
/* fields used by HYPER-V emulation */ /* fields used by HYPER-V emulation */
u64 hv_vapic; u64 hv_vapic;
};
struct kvm_mem_alias {
gfn_t base_gfn;
unsigned long npages;
gfn_t target_gfn;
#define KVM_ALIAS_INVALID 1UL
unsigned long flags;
};
#define KVM_ARCH_HAS_UNALIAS_INSTANTIATION cpumask_var_t wbinvd_dirty_mask;
struct kvm_mem_aliases {
struct kvm_mem_alias aliases[KVM_ALIAS_SLOTS];
int naliases;
}; };
struct kvm_arch { struct kvm_arch {
struct kvm_mem_aliases *aliases;
unsigned int n_free_mmu_pages; unsigned int n_free_mmu_pages;
unsigned int n_requested_mmu_pages; unsigned int n_requested_mmu_pages;
unsigned int n_alloc_mmu_pages; unsigned int n_alloc_mmu_pages;
...@@ -533,6 +520,8 @@ struct kvm_x86_ops { ...@@ -533,6 +520,8 @@ struct kvm_x86_ops {
void (*set_supported_cpuid)(u32 func, struct kvm_cpuid_entry2 *entry); void (*set_supported_cpuid)(u32 func, struct kvm_cpuid_entry2 *entry);
bool (*has_wbinvd_exit)(void);
const struct trace_print_flags *exit_reasons_str; const struct trace_print_flags *exit_reasons_str;
}; };
...@@ -576,7 +565,6 @@ enum emulation_result { ...@@ -576,7 +565,6 @@ enum emulation_result {
#define EMULTYPE_SKIP (1 << 2) #define EMULTYPE_SKIP (1 << 2)
int emulate_instruction(struct kvm_vcpu *vcpu, int emulate_instruction(struct kvm_vcpu *vcpu,
unsigned long cr2, u16 error_code, int emulation_type); unsigned long cr2, u16 error_code, int emulation_type);
void kvm_report_emulation_failure(struct kvm_vcpu *cvpu, const char *context);
void realmode_lgdt(struct kvm_vcpu *vcpu, u16 size, unsigned long address); void realmode_lgdt(struct kvm_vcpu *vcpu, u16 size, unsigned long address);
void realmode_lidt(struct kvm_vcpu *vcpu, u16 size, unsigned long address); void realmode_lidt(struct kvm_vcpu *vcpu, u16 size, unsigned long address);
...@@ -591,10 +579,7 @@ void kvm_emulate_cpuid(struct kvm_vcpu *vcpu); ...@@ -591,10 +579,7 @@ void kvm_emulate_cpuid(struct kvm_vcpu *vcpu);
int kvm_emulate_halt(struct kvm_vcpu *vcpu); int kvm_emulate_halt(struct kvm_vcpu *vcpu);
int emulate_invlpg(struct kvm_vcpu *vcpu, gva_t address); int emulate_invlpg(struct kvm_vcpu *vcpu, gva_t address);
int emulate_clts(struct kvm_vcpu *vcpu); int emulate_clts(struct kvm_vcpu *vcpu);
int emulator_get_dr(struct x86_emulate_ctxt *ctxt, int dr, int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu);
unsigned long *dest);
int emulator_set_dr(struct x86_emulate_ctxt *ctxt, int dr,
unsigned long value);
void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg); void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector, int seg); int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector, int seg);
...@@ -602,15 +587,16 @@ int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector, int seg); ...@@ -602,15 +587,16 @@ int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector, int seg);
int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason, int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason,
bool has_error_code, u32 error_code); bool has_error_code, u32 error_code);
void kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0); int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
void kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3); int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3);
void kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4); int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
void kvm_set_cr8(struct kvm_vcpu *vcpu, unsigned long cr8); void kvm_set_cr8(struct kvm_vcpu *vcpu, unsigned long cr8);
int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val); int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val);
int kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val); int kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val);
unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu); unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu);
void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw); void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw);
void kvm_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l); void kvm_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l);
int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr);
int kvm_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata); int kvm_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata);
int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data); int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data);
...@@ -630,12 +616,7 @@ int kvm_pic_set_irq(void *opaque, int irq, int level); ...@@ -630,12 +616,7 @@ int kvm_pic_set_irq(void *opaque, int irq, int level);
void kvm_inject_nmi(struct kvm_vcpu *vcpu); void kvm_inject_nmi(struct kvm_vcpu *vcpu);
void fx_init(struct kvm_vcpu *vcpu); int fx_init(struct kvm_vcpu *vcpu);
int emulator_write_emulated(unsigned long addr,
const void *val,
unsigned int bytes,
struct kvm_vcpu *vcpu);
void kvm_mmu_flush_tlb(struct kvm_vcpu *vcpu); void kvm_mmu_flush_tlb(struct kvm_vcpu *vcpu);
void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa, void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
...@@ -664,8 +645,6 @@ void kvm_disable_tdp(void); ...@@ -664,8 +645,6 @@ void kvm_disable_tdp(void);
int complete_pio(struct kvm_vcpu *vcpu); int complete_pio(struct kvm_vcpu *vcpu);
bool kvm_check_iopl(struct kvm_vcpu *vcpu); bool kvm_check_iopl(struct kvm_vcpu *vcpu);
struct kvm_memory_slot *gfn_to_memslot_unaliased(struct kvm *kvm, gfn_t gfn);
static inline struct kvm_mmu_page *page_header(hpa_t shadow_page) static inline struct kvm_mmu_page *page_header(hpa_t shadow_page)
{ {
struct page *page = pfn_to_page(shadow_page >> PAGE_SHIFT); struct page *page = pfn_to_page(shadow_page >> PAGE_SHIFT);
...@@ -719,21 +698,6 @@ static inline unsigned long read_msr(unsigned long msr) ...@@ -719,21 +698,6 @@ static inline unsigned long read_msr(unsigned long msr)
} }
#endif #endif
static inline void kvm_fx_save(struct i387_fxsave_struct *image)
{
asm("fxsave (%0)":: "r" (image));
}
static inline void kvm_fx_restore(struct i387_fxsave_struct *image)
{
asm("fxrstor (%0)":: "r" (image));
}
static inline void kvm_fx_finit(void)
{
asm("finit");
}
static inline u32 get_rdx_init_val(void) static inline u32 get_rdx_init_val(void)
{ {
return 0x600; /* P6 family */ return 0x600; /* P6 family */
......
...@@ -20,6 +20,7 @@ ...@@ -20,6 +20,7 @@
#define _EFER_LMA 10 /* Long mode active (read-only) */ #define _EFER_LMA 10 /* Long mode active (read-only) */
#define _EFER_NX 11 /* No execute enable */ #define _EFER_NX 11 /* No execute enable */
#define _EFER_SVME 12 /* Enable virtualization */ #define _EFER_SVME 12 /* Enable virtualization */
#define _EFER_LMSLE 13 /* Long Mode Segment Limit Enable */
#define _EFER_FFXSR 14 /* Enable Fast FXSAVE/FXRSTOR */ #define _EFER_FFXSR 14 /* Enable Fast FXSAVE/FXRSTOR */
#define EFER_SCE (1<<_EFER_SCE) #define EFER_SCE (1<<_EFER_SCE)
...@@ -27,6 +28,7 @@ ...@@ -27,6 +28,7 @@
#define EFER_LMA (1<<_EFER_LMA) #define EFER_LMA (1<<_EFER_LMA)
#define EFER_NX (1<<_EFER_NX) #define EFER_NX (1<<_EFER_NX)
#define EFER_SVME (1<<_EFER_SVME) #define EFER_SVME (1<<_EFER_SVME)
#define EFER_LMSLE (1<<_EFER_LMSLE)
#define EFER_FFXSR (1<<_EFER_FFXSR) #define EFER_FFXSR (1<<_EFER_FFXSR)
/* Intel MSRs. Some also available on other CPUs */ /* Intel MSRs. Some also available on other CPUs */
......
...@@ -257,6 +257,7 @@ enum vmcs_field { ...@@ -257,6 +257,7 @@ enum vmcs_field {
#define EXIT_REASON_IO_INSTRUCTION 30 #define EXIT_REASON_IO_INSTRUCTION 30
#define EXIT_REASON_MSR_READ 31 #define EXIT_REASON_MSR_READ 31
#define EXIT_REASON_MSR_WRITE 32 #define EXIT_REASON_MSR_WRITE 32
#define EXIT_REASON_INVALID_STATE 33
#define EXIT_REASON_MWAIT_INSTRUCTION 36 #define EXIT_REASON_MWAIT_INSTRUCTION 36
#define EXIT_REASON_MONITOR_INSTRUCTION 39 #define EXIT_REASON_MONITOR_INSTRUCTION 39
#define EXIT_REASON_PAUSE_INSTRUCTION 40 #define EXIT_REASON_PAUSE_INSTRUCTION 40
...@@ -266,6 +267,7 @@ enum vmcs_field { ...@@ -266,6 +267,7 @@ enum vmcs_field {
#define EXIT_REASON_EPT_VIOLATION 48 #define EXIT_REASON_EPT_VIOLATION 48
#define EXIT_REASON_EPT_MISCONFIG 49 #define EXIT_REASON_EPT_MISCONFIG 49
#define EXIT_REASON_WBINVD 54 #define EXIT_REASON_WBINVD 54
#define EXIT_REASON_XSETBV 55
/* /*
* Interruption-information format * Interruption-information format
...@@ -375,6 +377,9 @@ enum vmcs_field { ...@@ -375,6 +377,9 @@ enum vmcs_field {
#define VMX_EPT_EXTENT_CONTEXT_BIT (1ull << 25) #define VMX_EPT_EXTENT_CONTEXT_BIT (1ull << 25)
#define VMX_EPT_EXTENT_GLOBAL_BIT (1ull << 26) #define VMX_EPT_EXTENT_GLOBAL_BIT (1ull << 26)
#define VMX_VPID_EXTENT_SINGLE_CONTEXT_BIT (1ull << 9) /* (41 - 32) */
#define VMX_VPID_EXTENT_GLOBAL_CONTEXT_BIT (1ull << 10) /* (42 - 32) */
#define VMX_EPT_DEFAULT_GAW 3 #define VMX_EPT_DEFAULT_GAW 3
#define VMX_EPT_MAX_GAW 0x4 #define VMX_EPT_MAX_GAW 0x4
#define VMX_EPT_MT_EPTE_SHIFT 3 #define VMX_EPT_MT_EPTE_SHIFT 3
......
...@@ -13,6 +13,12 @@ ...@@ -13,6 +13,12 @@
#define FXSAVE_SIZE 512 #define FXSAVE_SIZE 512
#define XSAVE_HDR_SIZE 64
#define XSAVE_HDR_OFFSET FXSAVE_SIZE
#define XSAVE_YMM_SIZE 256
#define XSAVE_YMM_OFFSET (XSAVE_HDR_SIZE + XSAVE_HDR_OFFSET)
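Worked layout (illustrative): the constants above describe the in-memory XSAVE image as

bytes    0..511   legacy FXSAVE area   (FXSAVE_SIZE = 512)
bytes  512..575   XSAVE header         (XSAVE_HDR_OFFSET = 512, XSAVE_HDR_SIZE = 64)
bytes  576..831   YMM high halves      (XSAVE_YMM_OFFSET = 64 + 512 = 576, XSAVE_YMM_SIZE = 256)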
/* /*
* These are the features that the OS can handle currently. * These are the features that the OS can handle currently.
*/ */
......
...@@ -107,7 +107,7 @@ void __cpuinit fpu_init(void) ...@@ -107,7 +107,7 @@ void __cpuinit fpu_init(void)
} }
#endif /* CONFIG_X86_64 */ #endif /* CONFIG_X86_64 */
static void fpu_finit(struct fpu *fpu) void fpu_finit(struct fpu *fpu)
{ {
#ifdef CONFIG_X86_32 #ifdef CONFIG_X86_32
if (!HAVE_HWFP) { if (!HAVE_HWFP) {
...@@ -132,6 +132,7 @@ static void fpu_finit(struct fpu *fpu) ...@@ -132,6 +132,7 @@ static void fpu_finit(struct fpu *fpu)
fp->fos = 0xffff0000u; fp->fos = 0xffff0000u;
} }
} }
EXPORT_SYMBOL_GPL(fpu_finit);
/* /*
* The _current_ task is using the FPU for the first time * The _current_ task is using the FPU for the first time
......
...@@ -28,6 +28,7 @@ unsigned long idle_nomwait; ...@@ -28,6 +28,7 @@ unsigned long idle_nomwait;
EXPORT_SYMBOL(idle_nomwait); EXPORT_SYMBOL(idle_nomwait);
struct kmem_cache *task_xstate_cachep; struct kmem_cache *task_xstate_cachep;
EXPORT_SYMBOL_GPL(task_xstate_cachep);
int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
{ {
......
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
* Copyright (c) 2006 Intel Corporation * Copyright (c) 2006 Intel Corporation
* Copyright (c) 2007 Keir Fraser, XenSource Inc * Copyright (c) 2007 Keir Fraser, XenSource Inc
* Copyright (c) 2008 Intel Corporation * Copyright (c) 2008 Intel Corporation
* Copyright 2009 Red Hat, Inc. and/or its affiliates.
* *
* Permission is hereby granted, free of charge, to any person obtaining a copy * Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal * of this software and associated documentation files (the "Software"), to deal
...@@ -33,6 +34,7 @@ ...@@ -33,6 +34,7 @@
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/workqueue.h>
#include "irq.h" #include "irq.h"
#include "i8254.h" #include "i8254.h"
...@@ -243,11 +245,22 @@ static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian) ...@@ -243,11 +245,22 @@ static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian)
{ {
struct kvm_kpit_state *ps = container_of(kian, struct kvm_kpit_state, struct kvm_kpit_state *ps = container_of(kian, struct kvm_kpit_state,
irq_ack_notifier); irq_ack_notifier);
raw_spin_lock(&ps->inject_lock); int value;
if (atomic_dec_return(&ps->pit_timer.pending) < 0)
spin_lock(&ps->inject_lock);
value = atomic_dec_return(&ps->pit_timer.pending);
if (value < 0)
/* spurious acks can be generated if, for example, the
* PIC is being reset. Handle it gracefully here
*/
atomic_inc(&ps->pit_timer.pending); atomic_inc(&ps->pit_timer.pending);
else if (value > 0)
/* in this case, we had multiple outstanding pit interrupts
* that we needed to inject. Reinject
*/
queue_work(ps->pit->wq, &ps->pit->expired);
ps->irq_ack = 1; ps->irq_ack = 1;
raw_spin_unlock(&ps->inject_lock); spin_unlock(&ps->inject_lock);
} }
void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu) void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu)
...@@ -263,10 +276,10 @@ void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu) ...@@ -263,10 +276,10 @@ void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu)
hrtimer_start_expires(timer, HRTIMER_MODE_ABS); hrtimer_start_expires(timer, HRTIMER_MODE_ABS);
} }
static void destroy_pit_timer(struct kvm_timer *pt) static void destroy_pit_timer(struct kvm_pit *pit)
{ {
pr_debug("execute del timer!\n"); hrtimer_cancel(&pit->pit_state.pit_timer.timer);
hrtimer_cancel(&pt->timer); cancel_work_sync(&pit->expired);
} }
static bool kpit_is_periodic(struct kvm_timer *ktimer) static bool kpit_is_periodic(struct kvm_timer *ktimer)
...@@ -280,6 +293,60 @@ static struct kvm_timer_ops kpit_ops = { ...@@ -280,6 +293,60 @@ static struct kvm_timer_ops kpit_ops = {
.is_periodic = kpit_is_periodic, .is_periodic = kpit_is_periodic,
}; };
static void pit_do_work(struct work_struct *work)
{
struct kvm_pit *pit = container_of(work, struct kvm_pit, expired);
struct kvm *kvm = pit->kvm;
struct kvm_vcpu *vcpu;
int i;
struct kvm_kpit_state *ps = &pit->pit_state;
int inject = 0;
/* Try to inject pending interrupts when
* the last one has been acked.
*/
spin_lock(&ps->inject_lock);
if (ps->irq_ack) {
ps->irq_ack = 0;
inject = 1;
}
spin_unlock(&ps->inject_lock);
if (inject) {
kvm_set_irq(kvm, kvm->arch.vpit->irq_source_id, 0, 1);
kvm_set_irq(kvm, kvm->arch.vpit->irq_source_id, 0, 0);
/*
* Provides NMI watchdog support via Virtual Wire mode.
* The route is: PIT -> PIC -> LVT0 in NMI mode.
*
* Note: Our Virtual Wire implementation is simplified, only
* propagating PIT interrupts to all VCPUs when they have set
* LVT0 to NMI delivery. Other PIC interrupts are just sent to
* VCPU0, and only if its LVT0 is in EXTINT mode.
*/
if (kvm->arch.vapics_in_nmi_mode > 0)
kvm_for_each_vcpu(i, vcpu, kvm)
kvm_apic_nmi_wd_deliver(vcpu);
}
}
static enum hrtimer_restart pit_timer_fn(struct hrtimer *data)
{
struct kvm_timer *ktimer = container_of(data, struct kvm_timer, timer);
struct kvm_pit *pt = ktimer->kvm->arch.vpit;
if (ktimer->reinject || !atomic_read(&ktimer->pending)) {
atomic_inc(&ktimer->pending);
queue_work(pt->wq, &pt->expired);
}
if (ktimer->t_ops->is_periodic(ktimer)) {
hrtimer_add_expires_ns(&ktimer->timer, ktimer->period);
return HRTIMER_RESTART;
} else
return HRTIMER_NORESTART;
}
static void create_pit_timer(struct kvm_kpit_state *ps, u32 val, int is_period) static void create_pit_timer(struct kvm_kpit_state *ps, u32 val, int is_period)
{ {
struct kvm_timer *pt = &ps->pit_timer; struct kvm_timer *pt = &ps->pit_timer;
...@@ -291,13 +358,13 @@ static void create_pit_timer(struct kvm_kpit_state *ps, u32 val, int is_period) ...@@ -291,13 +358,13 @@ static void create_pit_timer(struct kvm_kpit_state *ps, u32 val, int is_period)
/* TODO: the new value only takes effect after the timer is retriggered */ /* TODO: the new value only takes effect after the timer is retriggered */
hrtimer_cancel(&pt->timer); hrtimer_cancel(&pt->timer);
cancel_work_sync(&ps->pit->expired);
pt->period = interval; pt->period = interval;
ps->is_periodic = is_period; ps->is_periodic = is_period;
pt->timer.function = kvm_timer_fn; pt->timer.function = pit_timer_fn;
pt->t_ops = &kpit_ops; pt->t_ops = &kpit_ops;
pt->kvm = ps->pit->kvm; pt->kvm = ps->pit->kvm;
pt->vcpu = pt->kvm->bsp_vcpu;
atomic_set(&pt->pending, 0); atomic_set(&pt->pending, 0);
ps->irq_ack = 1; ps->irq_ack = 1;
...@@ -346,7 +413,7 @@ static void pit_load_count(struct kvm *kvm, int channel, u32 val) ...@@ -346,7 +413,7 @@ static void pit_load_count(struct kvm *kvm, int channel, u32 val)
} }
break; break;
default: default:
destroy_pit_timer(&ps->pit_timer); destroy_pit_timer(kvm->arch.vpit);
} }
} }
...@@ -625,7 +692,15 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags) ...@@ -625,7 +692,15 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
mutex_init(&pit->pit_state.lock); mutex_init(&pit->pit_state.lock);
mutex_lock(&pit->pit_state.lock); mutex_lock(&pit->pit_state.lock);
raw_spin_lock_init(&pit->pit_state.inject_lock); spin_lock_init(&pit->pit_state.inject_lock);
pit->wq = create_singlethread_workqueue("kvm-pit-wq");
if (!pit->wq) {
mutex_unlock(&pit->pit_state.lock);
kfree(pit);
return NULL;
}
INIT_WORK(&pit->expired, pit_do_work);
kvm->arch.vpit = pit; kvm->arch.vpit = pit;
pit->kvm = kvm; pit->kvm = kvm;
...@@ -677,6 +752,9 @@ void kvm_free_pit(struct kvm *kvm) ...@@ -677,6 +752,9 @@ void kvm_free_pit(struct kvm *kvm)
struct hrtimer *timer; struct hrtimer *timer;
if (kvm->arch.vpit) { if (kvm->arch.vpit) {
kvm_io_bus_unregister_dev(kvm, KVM_PIO_BUS, &kvm->arch.vpit->dev);
kvm_io_bus_unregister_dev(kvm, KVM_PIO_BUS,
&kvm->arch.vpit->speaker_dev);
kvm_unregister_irq_mask_notifier(kvm, 0, kvm_unregister_irq_mask_notifier(kvm, 0,
&kvm->arch.vpit->mask_notifier); &kvm->arch.vpit->mask_notifier);
kvm_unregister_irq_ack_notifier(kvm, kvm_unregister_irq_ack_notifier(kvm,
...@@ -684,54 +762,10 @@ void kvm_free_pit(struct kvm *kvm) ...@@ -684,54 +762,10 @@ void kvm_free_pit(struct kvm *kvm)
mutex_lock(&kvm->arch.vpit->pit_state.lock); mutex_lock(&kvm->arch.vpit->pit_state.lock);
timer = &kvm->arch.vpit->pit_state.pit_timer.timer; timer = &kvm->arch.vpit->pit_state.pit_timer.timer;
hrtimer_cancel(timer); hrtimer_cancel(timer);
cancel_work_sync(&kvm->arch.vpit->expired);
kvm_free_irq_source_id(kvm, kvm->arch.vpit->irq_source_id); kvm_free_irq_source_id(kvm, kvm->arch.vpit->irq_source_id);
mutex_unlock(&kvm->arch.vpit->pit_state.lock); mutex_unlock(&kvm->arch.vpit->pit_state.lock);
destroy_workqueue(kvm->arch.vpit->wq);
kfree(kvm->arch.vpit); kfree(kvm->arch.vpit);
} }
} }
static void __inject_pit_timer_intr(struct kvm *kvm)
{
struct kvm_vcpu *vcpu;
int i;
kvm_set_irq(kvm, kvm->arch.vpit->irq_source_id, 0, 1);
kvm_set_irq(kvm, kvm->arch.vpit->irq_source_id, 0, 0);
/*
* Provides NMI watchdog support via Virtual Wire mode.
* The route is: PIT -> PIC -> LVT0 in NMI mode.
*
* Note: Our Virtual Wire implementation is simplified, only
* propagating PIT interrupts to all VCPUs when they have set
* LVT0 to NMI delivery. Other PIC interrupts are just sent to
* VCPU0, and only if its LVT0 is in EXTINT mode.
*/
if (kvm->arch.vapics_in_nmi_mode > 0)
kvm_for_each_vcpu(i, vcpu, kvm)
kvm_apic_nmi_wd_deliver(vcpu);
}
void kvm_inject_pit_timer_irqs(struct kvm_vcpu *vcpu)
{
struct kvm_pit *pit = vcpu->kvm->arch.vpit;
struct kvm *kvm = vcpu->kvm;
struct kvm_kpit_state *ps;
if (pit) {
int inject = 0;
ps = &pit->pit_state;
/* Try to inject pending interrupts when
* last one has been acked.
*/
raw_spin_lock(&ps->inject_lock);
if (atomic_read(&ps->pit_timer.pending) && ps->irq_ack) {
ps->irq_ack = 0;
inject = 1;
}
raw_spin_unlock(&ps->inject_lock);
if (inject)
__inject_pit_timer_intr(kvm);
}
}
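With kvm_inject_pit_timer_irqs() removed above, PIT interrupts are no longer injected on the vcpu entry path: the hrtimer callback only bumps the pending count and queues pit->expired on the dedicated kvm-pit-wq workqueue, and pit_do_work() raises the interrupt from process context. A generic sketch of that defer-from-hrtimer pattern (names are illustrative; this is not the PIT code):

/* Illustrative pattern: an hrtimer callback runs in hard-irq context, so it
 * only queues work; the work item may then sleep, take mutexes and inject. */
#include <linux/hrtimer.h>
#include <linux/workqueue.h>
#include <linux/kernel.h>

struct tick_source {
	struct hrtimer timer;
	struct work_struct expired;
	struct workqueue_struct *wq;
	u64 period_ns;
};

static void tick_expired_work(struct work_struct *work)
{
	struct tick_source *ts = container_of(work, struct tick_source, expired);
	/* process context: safe to take locks that sleep, kick vcpus, etc. */
	(void)ts;
}

static enum hrtimer_restart tick_timer_fn(struct hrtimer *t)
{
	struct tick_source *ts = container_of(t, struct tick_source, timer);

	queue_work(ts->wq, &ts->expired);		/* defer the heavy lifting */
	hrtimer_add_expires_ns(&ts->timer, ts->period_ns);
	return HRTIMER_RESTART;
}

static void tick_source_init(struct tick_source *ts, struct workqueue_struct *wq,
			     u64 period_ns)
{
	ts->wq = wq;
	ts->period_ns = period_ns;
	INIT_WORK(&ts->expired, tick_expired_work);
	hrtimer_init(&ts->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
	ts->timer.function = tick_timer_fn;
}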
...@@ -27,7 +27,7 @@ struct kvm_kpit_state { ...@@ -27,7 +27,7 @@ struct kvm_kpit_state {
u32 speaker_data_on; u32 speaker_data_on;
struct mutex lock; struct mutex lock;
struct kvm_pit *pit; struct kvm_pit *pit;
raw_spinlock_t inject_lock; spinlock_t inject_lock;
unsigned long irq_ack; unsigned long irq_ack;
struct kvm_irq_ack_notifier irq_ack_notifier; struct kvm_irq_ack_notifier irq_ack_notifier;
}; };
...@@ -40,6 +40,8 @@ struct kvm_pit { ...@@ -40,6 +40,8 @@ struct kvm_pit {
struct kvm_kpit_state pit_state; struct kvm_kpit_state pit_state;
int irq_source_id; int irq_source_id;
struct kvm_irq_mask_notifier mask_notifier; struct kvm_irq_mask_notifier mask_notifier;
struct workqueue_struct *wq;
struct work_struct expired;
}; };
#define KVM_PIT_BASE_ADDRESS 0x40 #define KVM_PIT_BASE_ADDRESS 0x40
......
...@@ -3,6 +3,7 @@ ...@@ -3,6 +3,7 @@
* *
* Copyright (c) 2003-2004 Fabrice Bellard * Copyright (c) 2003-2004 Fabrice Bellard
* Copyright (c) 2007 Intel Corporation * Copyright (c) 2007 Intel Corporation
* Copyright 2009 Red Hat, Inc. and/or its affiliates.
* *
* Permission is hereby granted, free of charge, to any person obtaining a copy * Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal * of this software and associated documentation files (the "Software"), to deal
...@@ -33,6 +34,8 @@ ...@@ -33,6 +34,8 @@
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#include "trace.h" #include "trace.h"
static void pic_irq_request(struct kvm *kvm, int level);
static void pic_lock(struct kvm_pic *s) static void pic_lock(struct kvm_pic *s)
__acquires(&s->lock) __acquires(&s->lock)
{ {
...@@ -43,16 +46,25 @@ static void pic_unlock(struct kvm_pic *s) ...@@ -43,16 +46,25 @@ static void pic_unlock(struct kvm_pic *s)
__releases(&s->lock) __releases(&s->lock)
{ {
bool wakeup = s->wakeup_needed; bool wakeup = s->wakeup_needed;
struct kvm_vcpu *vcpu; struct kvm_vcpu *vcpu, *found = NULL;
int i;
s->wakeup_needed = false; s->wakeup_needed = false;
raw_spin_unlock(&s->lock); raw_spin_unlock(&s->lock);
if (wakeup) { if (wakeup) {
vcpu = s->kvm->bsp_vcpu; kvm_for_each_vcpu(i, vcpu, s->kvm) {
if (vcpu) if (kvm_apic_accept_pic_intr(vcpu)) {
kvm_vcpu_kick(vcpu); found = vcpu;
break;
}
}
if (!found)
found = s->kvm->bsp_vcpu;
kvm_vcpu_kick(found);
} }
} }
...@@ -173,10 +185,7 @@ static void pic_update_irq(struct kvm_pic *s) ...@@ -173,10 +185,7 @@ static void pic_update_irq(struct kvm_pic *s)
pic_set_irq1(&s->pics[0], 2, 0); pic_set_irq1(&s->pics[0], 2, 0);
} }
irq = pic_get_irq(&s->pics[0]); irq = pic_get_irq(&s->pics[0]);
if (irq >= 0) pic_irq_request(s->kvm, irq >= 0);
s->irq_request(s->irq_request_opaque, 1);
else
s->irq_request(s->irq_request_opaque, 0);
} }
void kvm_pic_update_irq(struct kvm_pic *s) void kvm_pic_update_irq(struct kvm_pic *s)
...@@ -261,8 +270,7 @@ int kvm_pic_read_irq(struct kvm *kvm) ...@@ -261,8 +270,7 @@ int kvm_pic_read_irq(struct kvm *kvm)
void kvm_pic_reset(struct kvm_kpic_state *s) void kvm_pic_reset(struct kvm_kpic_state *s)
{ {
int irq; int irq;
struct kvm *kvm = s->pics_state->irq_request_opaque; struct kvm_vcpu *vcpu0 = s->pics_state->kvm->bsp_vcpu;
struct kvm_vcpu *vcpu0 = kvm->bsp_vcpu;
u8 irr = s->irr, isr = s->imr; u8 irr = s->irr, isr = s->imr;
s->last_irr = 0; s->last_irr = 0;
...@@ -301,8 +309,7 @@ static void pic_ioport_write(void *opaque, u32 addr, u32 val) ...@@ -301,8 +309,7 @@ static void pic_ioport_write(void *opaque, u32 addr, u32 val)
/* /*
* deassert a pending interrupt * deassert a pending interrupt
*/ */
s->pics_state->irq_request(s->pics_state-> pic_irq_request(s->pics_state->kvm, 0);
irq_request_opaque, 0);
s->init_state = 1; s->init_state = 1;
s->init4 = val & 1; s->init4 = val & 1;
if (val & 0x02) if (val & 0x02)
...@@ -356,10 +363,20 @@ static void pic_ioport_write(void *opaque, u32 addr, u32 val) ...@@ -356,10 +363,20 @@ static void pic_ioport_write(void *opaque, u32 addr, u32 val)
} }
} else } else
switch (s->init_state) { switch (s->init_state) {
case 0: /* normal mode */ case 0: { /* normal mode */
u8 imr_diff = s->imr ^ val,
off = (s == &s->pics_state->pics[0]) ? 0 : 8;
s->imr = val; s->imr = val;
for (irq = 0; irq < PIC_NUM_PINS/2; irq++)
if (imr_diff & (1 << irq))
kvm_fire_mask_notifiers(
s->pics_state->kvm,
SELECT_PIC(irq + off),
irq + off,
!!(s->imr & (1 << irq)));
pic_update_irq(s->pics_state); pic_update_irq(s->pics_state);
break; break;
}
case 1: case 1:
s->irq_base = val & 0xf8; s->irq_base = val & 0xf8;
s->init_state = 2; s->init_state = 2;
...@@ -518,9 +535,8 @@ static int picdev_read(struct kvm_io_device *this, ...@@ -518,9 +535,8 @@ static int picdev_read(struct kvm_io_device *this,
/* /*
* callback when PIC0 irq status changed * callback when PIC0 irq status changed
*/ */
static void pic_irq_request(void *opaque, int level) static void pic_irq_request(struct kvm *kvm, int level)
{ {
struct kvm *kvm = opaque;
struct kvm_vcpu *vcpu = kvm->bsp_vcpu; struct kvm_vcpu *vcpu = kvm->bsp_vcpu;
struct kvm_pic *s = pic_irqchip(kvm); struct kvm_pic *s = pic_irqchip(kvm);
int irq = pic_get_irq(&s->pics[0]); int irq = pic_get_irq(&s->pics[0]);
...@@ -549,8 +565,6 @@ struct kvm_pic *kvm_create_pic(struct kvm *kvm) ...@@ -549,8 +565,6 @@ struct kvm_pic *kvm_create_pic(struct kvm *kvm)
s->kvm = kvm; s->kvm = kvm;
s->pics[0].elcr_mask = 0xf8; s->pics[0].elcr_mask = 0xf8;
s->pics[1].elcr_mask = 0xde; s->pics[1].elcr_mask = 0xde;
s->irq_request = pic_irq_request;
s->irq_request_opaque = kvm;
s->pics[0].pics_state = s; s->pics[0].pics_state = s;
s->pics[1].pics_state = s; s->pics[1].pics_state = s;
......
/* /*
* irq.c: API for in kernel interrupt controller * irq.c: API for in kernel interrupt controller
* Copyright (c) 2007, Intel Corporation. * Copyright (c) 2007, Intel Corporation.
* Copyright 2009 Red Hat, Inc. and/or its affiliates.
* *
* This program is free software; you can redistribute it and/or modify it * This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License, * under the terms and conditions of the GNU General Public License,
...@@ -89,7 +90,6 @@ EXPORT_SYMBOL_GPL(kvm_cpu_get_interrupt); ...@@ -89,7 +90,6 @@ EXPORT_SYMBOL_GPL(kvm_cpu_get_interrupt);
void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu) void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu)
{ {
kvm_inject_apic_timer_irqs(vcpu); kvm_inject_apic_timer_irqs(vcpu);
kvm_inject_pit_timer_irqs(vcpu);
/* TODO: PIT, RTC etc. */ /* TODO: PIT, RTC etc. */
} }
EXPORT_SYMBOL_GPL(kvm_inject_pending_timer_irqs); EXPORT_SYMBOL_GPL(kvm_inject_pending_timer_irqs);
......
...@@ -38,8 +38,6 @@ ...@@ -38,8 +38,6 @@
struct kvm; struct kvm;
struct kvm_vcpu; struct kvm_vcpu;
typedef void irq_request_func(void *opaque, int level);
struct kvm_kpic_state { struct kvm_kpic_state {
u8 last_irr; /* edge detection */ u8 last_irr; /* edge detection */
u8 irr; /* interrupt request register */ u8 irr; /* interrupt request register */
...@@ -67,8 +65,6 @@ struct kvm_pic { ...@@ -67,8 +65,6 @@ struct kvm_pic {
unsigned pending_acks; unsigned pending_acks;
struct kvm *kvm; struct kvm *kvm;
struct kvm_kpic_state pics[2]; /* 0 is master pic, 1 is slave pic */ struct kvm_kpic_state pics[2]; /* 0 is master pic, 1 is slave pic */
irq_request_func *irq_request;
void *irq_request_opaque;
int output; /* intr from master PIC */ int output; /* intr from master PIC */
struct kvm_io_device dev; struct kvm_io_device dev;
void (*ack_notifier)(void *opaque, int irq); void (*ack_notifier)(void *opaque, int irq);
......
...@@ -36,6 +36,8 @@ static inline void kvm_rip_write(struct kvm_vcpu *vcpu, unsigned long val) ...@@ -36,6 +36,8 @@ static inline void kvm_rip_write(struct kvm_vcpu *vcpu, unsigned long val)
static inline u64 kvm_pdptr_read(struct kvm_vcpu *vcpu, int index) static inline u64 kvm_pdptr_read(struct kvm_vcpu *vcpu, int index)
{ {
might_sleep(); /* on svm */
if (!test_bit(VCPU_EXREG_PDPTR, if (!test_bit(VCPU_EXREG_PDPTR,
(unsigned long *)&vcpu->arch.regs_avail)) (unsigned long *)&vcpu->arch.regs_avail))
kvm_x86_ops->cache_reg(vcpu, VCPU_EXREG_PDPTR); kvm_x86_ops->cache_reg(vcpu, VCPU_EXREG_PDPTR);
...@@ -69,4 +71,10 @@ static inline ulong kvm_read_cr4(struct kvm_vcpu *vcpu) ...@@ -69,4 +71,10 @@ static inline ulong kvm_read_cr4(struct kvm_vcpu *vcpu)
return kvm_read_cr4_bits(vcpu, ~0UL); return kvm_read_cr4_bits(vcpu, ~0UL);
} }
static inline u64 kvm_read_edx_eax(struct kvm_vcpu *vcpu)
{
return (kvm_register_read(vcpu, VCPU_REGS_RAX) & -1u)
| ((u64)(kvm_register_read(vcpu, VCPU_REGS_RDX) & -1u) << 32);
}
#endif #endif
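kvm_read_edx_eax() assembles the 64-bit operand that instructions such as WRMSR and XSETBV pass in the EDX:EAX register pair (EAX holds the low half, EDX the high half). A hedged usage sketch; only the two register-read helpers are from the tree, the surrounding handler and its callee are illustrative:

/* Illustrative handler shape: read a selector from ECX and the 64-bit
 * payload from EDX:EAX, as a WRMSR- or XSETBV-style exit handler would. */
static int handle_wide_write(struct kvm_vcpu *vcpu)
{
	u32 index = kvm_register_read(vcpu, VCPU_REGS_RCX);
	u64 data = kvm_read_edx_eax(vcpu);

	return consume_wide_write(vcpu, index, data);	/* hypothetical consumer */
}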
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
* Copyright (C) 2006 Qumranet, Inc. * Copyright (C) 2006 Qumranet, Inc.
* Copyright (C) 2007 Novell * Copyright (C) 2007 Novell
* Copyright (C) 2007 Intel * Copyright (C) 2007 Intel
* Copyright 2009 Red Hat, Inc. and/or its affiliates.
* *
* Authors: * Authors:
* Dor Laor <dor.laor@qumranet.com> * Dor Laor <dor.laor@qumranet.com>
...@@ -328,7 +329,7 @@ int kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source, ...@@ -328,7 +329,7 @@ int kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
"dest_mode 0x%x, short_hand 0x%x\n", "dest_mode 0x%x, short_hand 0x%x\n",
target, source, dest, dest_mode, short_hand); target, source, dest, dest_mode, short_hand);
ASSERT(!target); ASSERT(target);
switch (short_hand) { switch (short_hand) {
case APIC_DEST_NOSHORT: case APIC_DEST_NOSHORT:
if (dest_mode == 0) if (dest_mode == 0)
...@@ -533,7 +534,7 @@ static void __report_tpr_access(struct kvm_lapic *apic, bool write) ...@@ -533,7 +534,7 @@ static void __report_tpr_access(struct kvm_lapic *apic, bool write)
struct kvm_vcpu *vcpu = apic->vcpu; struct kvm_vcpu *vcpu = apic->vcpu;
struct kvm_run *run = vcpu->run; struct kvm_run *run = vcpu->run;
set_bit(KVM_REQ_REPORT_TPR_ACCESS, &vcpu->requests); kvm_make_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu);
run->tpr_access.rip = kvm_rip_read(vcpu); run->tpr_access.rip = kvm_rip_read(vcpu);
run->tpr_access.is_write = write; run->tpr_access.is_write = write;
} }
...@@ -1106,13 +1107,11 @@ int kvm_apic_accept_pic_intr(struct kvm_vcpu *vcpu) ...@@ -1106,13 +1107,11 @@ int kvm_apic_accept_pic_intr(struct kvm_vcpu *vcpu)
u32 lvt0 = apic_get_reg(vcpu->arch.apic, APIC_LVT0); u32 lvt0 = apic_get_reg(vcpu->arch.apic, APIC_LVT0);
int r = 0; int r = 0;
if (kvm_vcpu_is_bsp(vcpu)) {
if (!apic_hw_enabled(vcpu->arch.apic)) if (!apic_hw_enabled(vcpu->arch.apic))
r = 1; r = 1;
if ((lvt0 & APIC_LVT_MASKED) == 0 && if ((lvt0 & APIC_LVT_MASKED) == 0 &&
GET_APIC_DELIVERY_MODE(lvt0) == APIC_MODE_EXTINT) GET_APIC_DELIVERY_MODE(lvt0) == APIC_MODE_EXTINT)
r = 1; r = 1;
}
return r; return r;
} }
......
...@@ -190,7 +190,7 @@ DEFINE_EVENT(kvm_mmu_page_class, kvm_mmu_unsync_page, ...@@ -190,7 +190,7 @@ DEFINE_EVENT(kvm_mmu_page_class, kvm_mmu_unsync_page,
TP_ARGS(sp) TP_ARGS(sp)
); );
DEFINE_EVENT(kvm_mmu_page_class, kvm_mmu_zap_page, DEFINE_EVENT(kvm_mmu_page_class, kvm_mmu_prepare_zap_page,
TP_PROTO(struct kvm_mmu_page *sp), TP_PROTO(struct kvm_mmu_page *sp),
TP_ARGS(sp) TP_ARGS(sp)
......
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
* AMD SVM support * AMD SVM support
* *
* Copyright (C) 2006 Qumranet, Inc. * Copyright (C) 2006 Qumranet, Inc.
* Copyright 2010 Red Hat, Inc. and/or its affiliates.
* *
* Authors: * Authors:
* Yaniv Kamay <yaniv@qumranet.com> * Yaniv Kamay <yaniv@qumranet.com>
...@@ -285,11 +286,11 @@ static inline void flush_guest_tlb(struct kvm_vcpu *vcpu) ...@@ -285,11 +286,11 @@ static inline void flush_guest_tlb(struct kvm_vcpu *vcpu)
static void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer) static void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
{ {
vcpu->arch.efer = efer;
if (!npt_enabled && !(efer & EFER_LMA)) if (!npt_enabled && !(efer & EFER_LMA))
efer &= ~EFER_LME; efer &= ~EFER_LME;
to_svm(vcpu)->vmcb->save.efer = efer | EFER_SVME; to_svm(vcpu)->vmcb->save.efer = efer | EFER_SVME;
vcpu->arch.efer = efer;
} }
static int is_external_interrupt(u32 info) static int is_external_interrupt(u32 info)
...@@ -640,7 +641,7 @@ static __init int svm_hardware_setup(void) ...@@ -640,7 +641,7 @@ static __init int svm_hardware_setup(void)
if (nested) { if (nested) {
printk(KERN_INFO "kvm: Nested Virtualization enabled\n"); printk(KERN_INFO "kvm: Nested Virtualization enabled\n");
kvm_enable_efer_bits(EFER_SVME); kvm_enable_efer_bits(EFER_SVME | EFER_LMSLE);
} }
for_each_possible_cpu(cpu) { for_each_possible_cpu(cpu) {
...@@ -806,7 +807,7 @@ static void init_vmcb(struct vcpu_svm *svm) ...@@ -806,7 +807,7 @@ static void init_vmcb(struct vcpu_svm *svm)
* svm_set_cr0() sets PG and WP and clears NW and CD on save->cr0. * svm_set_cr0() sets PG and WP and clears NW and CD on save->cr0.
*/ */
svm->vcpu.arch.cr0 = X86_CR0_NW | X86_CR0_CD | X86_CR0_ET; svm->vcpu.arch.cr0 = X86_CR0_NW | X86_CR0_CD | X86_CR0_ET;
kvm_set_cr0(&svm->vcpu, svm->vcpu.arch.cr0); (void)kvm_set_cr0(&svm->vcpu, svm->vcpu.arch.cr0);
save->cr4 = X86_CR4_PAE; save->cr4 = X86_CR4_PAE;
/* rdx = ?? */ /* rdx = ?? */
...@@ -903,13 +904,18 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id) ...@@ -903,13 +904,18 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
svm->asid_generation = 0; svm->asid_generation = 0;
init_vmcb(svm); init_vmcb(svm);
fx_init(&svm->vcpu); err = fx_init(&svm->vcpu);
if (err)
goto free_page4;
svm->vcpu.arch.apic_base = 0xfee00000 | MSR_IA32_APICBASE_ENABLE; svm->vcpu.arch.apic_base = 0xfee00000 | MSR_IA32_APICBASE_ENABLE;
if (kvm_vcpu_is_bsp(&svm->vcpu)) if (kvm_vcpu_is_bsp(&svm->vcpu))
svm->vcpu.arch.apic_base |= MSR_IA32_APICBASE_BSP; svm->vcpu.arch.apic_base |= MSR_IA32_APICBASE_BSP;
return &svm->vcpu; return &svm->vcpu;
free_page4:
__free_page(hsave_page);
free_page3: free_page3:
__free_pages(nested_msrpm_pages, MSRPM_ALLOC_ORDER); __free_pages(nested_msrpm_pages, MSRPM_ALLOC_ORDER);
free_page2: free_page2:
...@@ -1488,7 +1494,7 @@ static void svm_handle_mce(struct vcpu_svm *svm) ...@@ -1488,7 +1494,7 @@ static void svm_handle_mce(struct vcpu_svm *svm)
*/ */
pr_err("KVM: Guest triggered AMD Erratum 383\n"); pr_err("KVM: Guest triggered AMD Erratum 383\n");
set_bit(KVM_REQ_TRIPLE_FAULT, &svm->vcpu.requests); kvm_make_request(KVM_REQ_TRIPLE_FAULT, &svm->vcpu);
return; return;
} }
...@@ -1535,7 +1541,7 @@ static int io_interception(struct vcpu_svm *svm) ...@@ -1535,7 +1541,7 @@ static int io_interception(struct vcpu_svm *svm)
string = (io_info & SVM_IOIO_STR_MASK) != 0; string = (io_info & SVM_IOIO_STR_MASK) != 0;
in = (io_info & SVM_IOIO_TYPE_MASK) != 0; in = (io_info & SVM_IOIO_TYPE_MASK) != 0;
if (string || in) if (string || in)
return !(emulate_instruction(vcpu, 0, 0, 0) == EMULATE_DO_MMIO); return emulate_instruction(vcpu, 0, 0, 0) == EMULATE_DONE;
port = io_info >> 16; port = io_info >> 16;
size = (io_info & SVM_IOIO_SIZE_MASK) >> SVM_IOIO_SIZE_SHIFT; size = (io_info & SVM_IOIO_SIZE_MASK) >> SVM_IOIO_SIZE_SHIFT;
...@@ -1957,7 +1963,7 @@ static int nested_svm_vmexit(struct vcpu_svm *svm) ...@@ -1957,7 +1963,7 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
svm->vmcb->save.cr3 = hsave->save.cr3; svm->vmcb->save.cr3 = hsave->save.cr3;
svm->vcpu.arch.cr3 = hsave->save.cr3; svm->vcpu.arch.cr3 = hsave->save.cr3;
} else { } else {
kvm_set_cr3(&svm->vcpu, hsave->save.cr3); (void)kvm_set_cr3(&svm->vcpu, hsave->save.cr3);
} }
kvm_register_write(&svm->vcpu, VCPU_REGS_RAX, hsave->save.rax); kvm_register_write(&svm->vcpu, VCPU_REGS_RAX, hsave->save.rax);
kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, hsave->save.rsp); kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, hsave->save.rsp);
...@@ -2080,7 +2086,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm) ...@@ -2080,7 +2086,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
svm->vmcb->save.cr3 = nested_vmcb->save.cr3; svm->vmcb->save.cr3 = nested_vmcb->save.cr3;
svm->vcpu.arch.cr3 = nested_vmcb->save.cr3; svm->vcpu.arch.cr3 = nested_vmcb->save.cr3;
} else } else
kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3); (void)kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3);
/* Guest paging mode is active - reset mmu */ /* Guest paging mode is active - reset mmu */
kvm_mmu_reset_context(&svm->vcpu); kvm_mmu_reset_context(&svm->vcpu);
...@@ -2386,16 +2392,12 @@ static int iret_interception(struct vcpu_svm *svm) ...@@ -2386,16 +2392,12 @@ static int iret_interception(struct vcpu_svm *svm)
static int invlpg_interception(struct vcpu_svm *svm) static int invlpg_interception(struct vcpu_svm *svm)
{ {
if (emulate_instruction(&svm->vcpu, 0, 0, 0) != EMULATE_DONE) return emulate_instruction(&svm->vcpu, 0, 0, 0) == EMULATE_DONE;
pr_unimpl(&svm->vcpu, "%s: failed\n", __func__);
return 1;
} }
static int emulate_on_interception(struct vcpu_svm *svm) static int emulate_on_interception(struct vcpu_svm *svm)
{ {
if (emulate_instruction(&svm->vcpu, 0, 0, 0) != EMULATE_DONE) return emulate_instruction(&svm->vcpu, 0, 0, 0) == EMULATE_DONE;
pr_unimpl(&svm->vcpu, "%s: failed\n", __func__);
return 1;
} }
static int cr8_write_interception(struct vcpu_svm *svm) static int cr8_write_interception(struct vcpu_svm *svm)
...@@ -2726,6 +2728,99 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm) = { ...@@ -2726,6 +2728,99 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm) = {
[SVM_EXIT_NPF] = pf_interception, [SVM_EXIT_NPF] = pf_interception,
}; };
void dump_vmcb(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
struct vmcb_control_area *control = &svm->vmcb->control;
struct vmcb_save_area *save = &svm->vmcb->save;
pr_err("VMCB Control Area:\n");
pr_err("cr_read: %04x\n", control->intercept_cr_read);
pr_err("cr_write: %04x\n", control->intercept_cr_write);
pr_err("dr_read: %04x\n", control->intercept_dr_read);
pr_err("dr_write: %04x\n", control->intercept_dr_write);
pr_err("exceptions: %08x\n", control->intercept_exceptions);
pr_err("intercepts: %016llx\n", control->intercept);
pr_err("pause filter count: %d\n", control->pause_filter_count);
pr_err("iopm_base_pa: %016llx\n", control->iopm_base_pa);
pr_err("msrpm_base_pa: %016llx\n", control->msrpm_base_pa);
pr_err("tsc_offset: %016llx\n", control->tsc_offset);
pr_err("asid: %d\n", control->asid);
pr_err("tlb_ctl: %d\n", control->tlb_ctl);
pr_err("int_ctl: %08x\n", control->int_ctl);
pr_err("int_vector: %08x\n", control->int_vector);
pr_err("int_state: %08x\n", control->int_state);
pr_err("exit_code: %08x\n", control->exit_code);
pr_err("exit_info1: %016llx\n", control->exit_info_1);
pr_err("exit_info2: %016llx\n", control->exit_info_2);
pr_err("exit_int_info: %08x\n", control->exit_int_info);
pr_err("exit_int_info_err: %08x\n", control->exit_int_info_err);
pr_err("nested_ctl: %lld\n", control->nested_ctl);
pr_err("nested_cr3: %016llx\n", control->nested_cr3);
pr_err("event_inj: %08x\n", control->event_inj);
pr_err("event_inj_err: %08x\n", control->event_inj_err);
pr_err("lbr_ctl: %lld\n", control->lbr_ctl);
pr_err("next_rip: %016llx\n", control->next_rip);
pr_err("VMCB State Save Area:\n");
pr_err("es: s: %04x a: %04x l: %08x b: %016llx\n",
save->es.selector, save->es.attrib,
save->es.limit, save->es.base);
pr_err("cs: s: %04x a: %04x l: %08x b: %016llx\n",
save->cs.selector, save->cs.attrib,
save->cs.limit, save->cs.base);
pr_err("ss: s: %04x a: %04x l: %08x b: %016llx\n",
save->ss.selector, save->ss.attrib,
save->ss.limit, save->ss.base);
pr_err("ds: s: %04x a: %04x l: %08x b: %016llx\n",
save->ds.selector, save->ds.attrib,
save->ds.limit, save->ds.base);
pr_err("fs: s: %04x a: %04x l: %08x b: %016llx\n",
save->fs.selector, save->fs.attrib,
save->fs.limit, save->fs.base);
pr_err("gs: s: %04x a: %04x l: %08x b: %016llx\n",
save->gs.selector, save->gs.attrib,
save->gs.limit, save->gs.base);
pr_err("gdtr: s: %04x a: %04x l: %08x b: %016llx\n",
save->gdtr.selector, save->gdtr.attrib,
save->gdtr.limit, save->gdtr.base);
pr_err("ldtr: s: %04x a: %04x l: %08x b: %016llx\n",
save->ldtr.selector, save->ldtr.attrib,
save->ldtr.limit, save->ldtr.base);
pr_err("idtr: s: %04x a: %04x l: %08x b: %016llx\n",
save->idtr.selector, save->idtr.attrib,
save->idtr.limit, save->idtr.base);
pr_err("tr: s: %04x a: %04x l: %08x b: %016llx\n",
save->tr.selector, save->tr.attrib,
save->tr.limit, save->tr.base);
pr_err("cpl: %d efer: %016llx\n",
save->cpl, save->efer);
pr_err("cr0: %016llx cr2: %016llx\n",
save->cr0, save->cr2);
pr_err("cr3: %016llx cr4: %016llx\n",
save->cr3, save->cr4);
pr_err("dr6: %016llx dr7: %016llx\n",
save->dr6, save->dr7);
pr_err("rip: %016llx rflags: %016llx\n",
save->rip, save->rflags);
pr_err("rsp: %016llx rax: %016llx\n",
save->rsp, save->rax);
pr_err("star: %016llx lstar: %016llx\n",
save->star, save->lstar);
pr_err("cstar: %016llx sfmask: %016llx\n",
save->cstar, save->sfmask);
pr_err("kernel_gs_base: %016llx sysenter_cs: %016llx\n",
save->kernel_gs_base, save->sysenter_cs);
pr_err("sysenter_esp: %016llx sysenter_eip: %016llx\n",
save->sysenter_esp, save->sysenter_eip);
pr_err("gpat: %016llx dbgctl: %016llx\n",
save->g_pat, save->dbgctl);
pr_err("br_from: %016llx br_to: %016llx\n",
save->br_from, save->br_to);
pr_err("excp_from: %016llx excp_to: %016llx\n",
save->last_excp_from, save->last_excp_to);
}
static int handle_exit(struct kvm_vcpu *vcpu) static int handle_exit(struct kvm_vcpu *vcpu)
{ {
struct vcpu_svm *svm = to_svm(vcpu); struct vcpu_svm *svm = to_svm(vcpu);
...@@ -2770,6 +2865,8 @@ static int handle_exit(struct kvm_vcpu *vcpu) ...@@ -2770,6 +2865,8 @@ static int handle_exit(struct kvm_vcpu *vcpu)
kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY; kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY;
kvm_run->fail_entry.hardware_entry_failure_reason kvm_run->fail_entry.hardware_entry_failure_reason
= svm->vmcb->control.exit_code; = svm->vmcb->control.exit_code;
pr_err("KVM: FAILED VMRUN WITH VMCB:\n");
dump_vmcb(vcpu);
return 0; return 0;
} }
...@@ -2826,9 +2923,6 @@ static inline void svm_inject_irq(struct vcpu_svm *svm, int irq) ...@@ -2826,9 +2923,6 @@ static inline void svm_inject_irq(struct vcpu_svm *svm, int irq)
{ {
struct vmcb_control_area *control; struct vmcb_control_area *control;
trace_kvm_inj_virq(irq);
++svm->vcpu.stat.irq_injections;
control = &svm->vmcb->control; control = &svm->vmcb->control;
control->int_vector = irq; control->int_vector = irq;
control->int_ctl &= ~V_INTR_PRIO_MASK; control->int_ctl &= ~V_INTR_PRIO_MASK;
...@@ -2842,6 +2936,9 @@ static void svm_set_irq(struct kvm_vcpu *vcpu) ...@@ -2842,6 +2936,9 @@ static void svm_set_irq(struct kvm_vcpu *vcpu)
BUG_ON(!(gif_set(svm))); BUG_ON(!(gif_set(svm)));
trace_kvm_inj_virq(vcpu->arch.interrupt.nr);
++vcpu->stat.irq_injections;
svm->vmcb->control.event_inj = vcpu->arch.interrupt.nr | svm->vmcb->control.event_inj = vcpu->arch.interrupt.nr |
SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR; SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR;
} }
...@@ -3327,6 +3424,11 @@ static bool svm_rdtscp_supported(void) ...@@ -3327,6 +3424,11 @@ static bool svm_rdtscp_supported(void)
return false; return false;
} }
static bool svm_has_wbinvd_exit(void)
{
return true;
}
static void svm_fpu_deactivate(struct kvm_vcpu *vcpu) static void svm_fpu_deactivate(struct kvm_vcpu *vcpu)
{ {
struct vcpu_svm *svm = to_svm(vcpu); struct vcpu_svm *svm = to_svm(vcpu);
...@@ -3411,6 +3513,8 @@ static struct kvm_x86_ops svm_x86_ops = { ...@@ -3411,6 +3513,8 @@ static struct kvm_x86_ops svm_x86_ops = {
.rdtscp_supported = svm_rdtscp_supported, .rdtscp_supported = svm_rdtscp_supported,
.set_supported_cpuid = svm_set_supported_cpuid, .set_supported_cpuid = svm_set_supported_cpuid,
.has_wbinvd_exit = svm_has_wbinvd_exit,
}; };
static int __init svm_init(void) static int __init svm_init(void)
......
/*
* Kernel-based Virtual Machine driver for Linux
*
* This module enables machines with Intel VT-x extensions to run virtual
* machines without emulation or binary translation.
*
* timer support
*
* Copyright 2010 Red Hat, Inc. and/or its affiliates.
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*/
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#include <linux/kvm.h> #include <linux/kvm.h>
#include <linux/hrtimer.h> #include <linux/hrtimer.h>
...@@ -18,7 +32,7 @@ static int __kvm_timer_fn(struct kvm_vcpu *vcpu, struct kvm_timer *ktimer) ...@@ -18,7 +32,7 @@ static int __kvm_timer_fn(struct kvm_vcpu *vcpu, struct kvm_timer *ktimer)
if (ktimer->reinject || !atomic_read(&ktimer->pending)) { if (ktimer->reinject || !atomic_read(&ktimer->pending)) {
atomic_inc(&ktimer->pending); atomic_inc(&ktimer->pending);
/* FIXME: this code should not know anything about vcpus */ /* FIXME: this code should not know anything about vcpus */
set_bit(KVM_REQ_PENDING_TIMER, &vcpu->requests); kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);
} }
if (waitqueue_active(q)) if (waitqueue_active(q))
......
...@@ -65,13 +65,6 @@ static inline int is_paging(struct kvm_vcpu *vcpu) ...@@ -65,13 +65,6 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
return kvm_read_cr0_bits(vcpu, X86_CR0_PG); return kvm_read_cr0_bits(vcpu, X86_CR0_PG);
} }
static inline struct kvm_mem_aliases *kvm_aliases(struct kvm *kvm)
{
return rcu_dereference_check(kvm->arch.aliases,
srcu_read_lock_held(&kvm->srcu)
|| lockdep_is_held(&kvm->slots_lock));
}
void kvm_before_handle_nmi(struct kvm_vcpu *vcpu); void kvm_before_handle_nmi(struct kvm_vcpu *vcpu);
void kvm_after_handle_nmi(struct kvm_vcpu *vcpu); void kvm_after_handle_nmi(struct kvm_vcpu *vcpu);
......
...@@ -524,6 +524,12 @@ struct kvm_enable_cap { ...@@ -524,6 +524,12 @@ struct kvm_enable_cap {
#define KVM_CAP_PPC_OSI 52 #define KVM_CAP_PPC_OSI 52
#define KVM_CAP_PPC_UNSET_IRQ 53 #define KVM_CAP_PPC_UNSET_IRQ 53
#define KVM_CAP_ENABLE_CAP 54 #define KVM_CAP_ENABLE_CAP 54
#ifdef __KVM_HAVE_XSAVE
#define KVM_CAP_XSAVE 55
#endif
#ifdef __KVM_HAVE_XCRS
#define KVM_CAP_XCRS 56
#endif
#ifdef KVM_CAP_IRQ_ROUTING #ifdef KVM_CAP_IRQ_ROUTING
...@@ -613,6 +619,7 @@ struct kvm_clock_data { ...@@ -613,6 +619,7 @@ struct kvm_clock_data {
*/ */
#define KVM_CREATE_VCPU _IO(KVMIO, 0x41) #define KVM_CREATE_VCPU _IO(KVMIO, 0x41)
#define KVM_GET_DIRTY_LOG _IOW(KVMIO, 0x42, struct kvm_dirty_log) #define KVM_GET_DIRTY_LOG _IOW(KVMIO, 0x42, struct kvm_dirty_log)
/* KVM_SET_MEMORY_ALIAS is obsolete: */
#define KVM_SET_MEMORY_ALIAS _IOW(KVMIO, 0x43, struct kvm_memory_alias) #define KVM_SET_MEMORY_ALIAS _IOW(KVMIO, 0x43, struct kvm_memory_alias)
#define KVM_SET_NR_MMU_PAGES _IO(KVMIO, 0x44) #define KVM_SET_NR_MMU_PAGES _IO(KVMIO, 0x44)
#define KVM_GET_NR_MMU_PAGES _IO(KVMIO, 0x45) #define KVM_GET_NR_MMU_PAGES _IO(KVMIO, 0x45)
...@@ -714,6 +721,12 @@ struct kvm_clock_data { ...@@ -714,6 +721,12 @@ struct kvm_clock_data {
#define KVM_GET_DEBUGREGS _IOR(KVMIO, 0xa1, struct kvm_debugregs) #define KVM_GET_DEBUGREGS _IOR(KVMIO, 0xa1, struct kvm_debugregs)
#define KVM_SET_DEBUGREGS _IOW(KVMIO, 0xa2, struct kvm_debugregs) #define KVM_SET_DEBUGREGS _IOW(KVMIO, 0xa2, struct kvm_debugregs)
#define KVM_ENABLE_CAP _IOW(KVMIO, 0xa3, struct kvm_enable_cap) #define KVM_ENABLE_CAP _IOW(KVMIO, 0xa3, struct kvm_enable_cap)
/* Available with KVM_CAP_XSAVE */
#define KVM_GET_XSAVE _IOR(KVMIO, 0xa4, struct kvm_xsave)
#define KVM_SET_XSAVE _IOW(KVMIO, 0xa5, struct kvm_xsave)
/* Available with KVM_CAP_XCRS */
#define KVM_GET_XCRS _IOR(KVMIO, 0xa6, struct kvm_xcrs)
#define KVM_SET_XCRS _IOW(KVMIO, 0xa7, struct kvm_xcrs)
#define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0) #define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0)
......
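The new ioctls let userspace save and restore a vcpu's extended state (the XSAVE area and the XCR registers), so that guest features such as AVX survive save/restore and migration. A minimal userspace sketch, assuming kvm_fd is an open /dev/kvm descriptor and vcpu_fd a vcpu descriptor, with error handling trimmed:

/* Minimal sketch: probe for the capability, then fetch the vcpu's XSAVE area. */
#include <linux/kvm.h>
#include <sys/ioctl.h>

static int save_vcpu_xstate(int kvm_fd, int vcpu_fd, struct kvm_xsave *out)
{
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_XSAVE) <= 0)
		return -1;			/* host kernel or CPU lacks XSAVE support */
	return ioctl(vcpu_fd, KVM_GET_XSAVE, out);
}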
...@@ -81,13 +81,14 @@ struct kvm_vcpu { ...@@ -81,13 +81,14 @@ struct kvm_vcpu {
int vcpu_id; int vcpu_id;
struct mutex mutex; struct mutex mutex;
int cpu; int cpu;
atomic_t guest_mode;
struct kvm_run *run; struct kvm_run *run;
unsigned long requests; unsigned long requests;
unsigned long guest_debug; unsigned long guest_debug;
int srcu_idx; int srcu_idx;
int fpu_active; int fpu_active;
int guest_fpu_loaded; int guest_fpu_loaded, guest_xcr0_loaded;
wait_queue_head_t wq; wait_queue_head_t wq;
int sigset_active; int sigset_active;
sigset_t sigset; sigset_t sigset;
...@@ -123,6 +124,7 @@ struct kvm_memory_slot { ...@@ -123,6 +124,7 @@ struct kvm_memory_slot {
} *lpage_info[KVM_NR_PAGE_SIZES - 1]; } *lpage_info[KVM_NR_PAGE_SIZES - 1];
unsigned long userspace_addr; unsigned long userspace_addr;
int user_alloc; int user_alloc;
int id;
}; };
static inline unsigned long kvm_dirty_bitmap_bytes(struct kvm_memory_slot *memslot) static inline unsigned long kvm_dirty_bitmap_bytes(struct kvm_memory_slot *memslot)
...@@ -266,6 +268,8 @@ extern pfn_t bad_pfn; ...@@ -266,6 +268,8 @@ extern pfn_t bad_pfn;
int is_error_page(struct page *page); int is_error_page(struct page *page);
int is_error_pfn(pfn_t pfn); int is_error_pfn(pfn_t pfn);
int is_hwpoison_pfn(pfn_t pfn);
int is_fault_pfn(pfn_t pfn);
int kvm_is_error_hva(unsigned long addr); int kvm_is_error_hva(unsigned long addr);
int kvm_set_memory_region(struct kvm *kvm, int kvm_set_memory_region(struct kvm *kvm,
struct kvm_userspace_memory_region *mem, struct kvm_userspace_memory_region *mem,
...@@ -284,8 +288,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, ...@@ -284,8 +288,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
int user_alloc); int user_alloc);
void kvm_disable_largepages(void); void kvm_disable_largepages(void);
void kvm_arch_flush_shadow(struct kvm *kvm); void kvm_arch_flush_shadow(struct kvm *kvm);
gfn_t unalias_gfn(struct kvm *kvm, gfn_t gfn);
gfn_t unalias_gfn_instantiation(struct kvm *kvm, gfn_t gfn);
struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn); struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn); unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
...@@ -445,7 +447,8 @@ void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq, ...@@ -445,7 +447,8 @@ void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
struct kvm_irq_mask_notifier *kimn); struct kvm_irq_mask_notifier *kimn);
void kvm_unregister_irq_mask_notifier(struct kvm *kvm, int irq, void kvm_unregister_irq_mask_notifier(struct kvm *kvm, int irq,
struct kvm_irq_mask_notifier *kimn); struct kvm_irq_mask_notifier *kimn);
void kvm_fire_mask_notifiers(struct kvm *kvm, int irq, bool mask); void kvm_fire_mask_notifiers(struct kvm *kvm, unsigned irqchip, unsigned pin,
bool mask);
#ifdef __KVM_HAVE_IOAPIC #ifdef __KVM_HAVE_IOAPIC
void kvm_get_intr_delivery_bitmask(struct kvm_ioapic *ioapic, void kvm_get_intr_delivery_bitmask(struct kvm_ioapic *ioapic,
...@@ -562,10 +565,6 @@ static inline int mmu_notifier_retry(struct kvm_vcpu *vcpu, unsigned long mmu_se ...@@ -562,10 +565,6 @@ static inline int mmu_notifier_retry(struct kvm_vcpu *vcpu, unsigned long mmu_se
} }
#endif #endif
#ifndef KVM_ARCH_HAS_UNALIAS_INSTANTIATION
#define unalias_gfn_instantiation unalias_gfn
#endif
#ifdef CONFIG_HAVE_KVM_IRQCHIP #ifdef CONFIG_HAVE_KVM_IRQCHIP
#define KVM_MAX_IRQ_ROUTES 1024 #define KVM_MAX_IRQ_ROUTES 1024
...@@ -628,5 +627,25 @@ static inline long kvm_vm_ioctl_assigned_device(struct kvm *kvm, unsigned ioctl, ...@@ -628,5 +627,25 @@ static inline long kvm_vm_ioctl_assigned_device(struct kvm *kvm, unsigned ioctl,
#endif #endif
static inline void kvm_make_request(int req, struct kvm_vcpu *vcpu)
{
set_bit(req, &vcpu->requests);
}
static inline bool kvm_make_check_request(int req, struct kvm_vcpu *vcpu)
{
return test_and_set_bit(req, &vcpu->requests);
}
static inline bool kvm_check_request(int req, struct kvm_vcpu *vcpu)
{
if (test_bit(req, &vcpu->requests)) {
clear_bit(req, &vcpu->requests);
return true;
} else {
return false;
}
}
#endif #endif
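These helpers wrap the open-coded set_bit()/test_bit() sequences on vcpu->requests that the rest of the series converts (see the kvm_make_request() call sites above): a producer raises a request bit, and the vcpu loop consumes it exactly once. A small sketch of the intended producer/consumer split, with illustrative function names:

/* Illustrative producer/consumer shape for the request helpers. */
static void some_producer(struct kvm_vcpu *vcpu)
{
	kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);	/* just sets the bit */
}

static void some_vcpu_loop_step(struct kvm_vcpu *vcpu)
{
	if (kvm_check_request(KVM_REQ_PENDING_TIMER, vcpu)) {
		/* the bit was set and has now been cleared, so the
		 * pending-timer work is handled exactly once here */
	}
}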
...@@ -32,11 +32,11 @@ ...@@ -32,11 +32,11 @@
typedef unsigned long gva_t; typedef unsigned long gva_t;
typedef u64 gpa_t; typedef u64 gpa_t;
typedef unsigned long gfn_t; typedef u64 gfn_t;
typedef unsigned long hva_t; typedef unsigned long hva_t;
typedef u64 hpa_t; typedef u64 hpa_t;
typedef unsigned long hfn_t; typedef u64 hfn_t;
typedef hfn_t pfn_t; typedef hfn_t pfn_t;
......
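Widening gfn_t and hfn_t to u64 keeps frame numbers from truncating on 32-bit hosts, where unsigned long is only 32 bits: a guest frame number at or beyond 2^32 (guest physical addresses of 16TB and up with 4K pages) would otherwise be mangled when shifted into an address. An illustrative conversion mirroring the usual gfn-to-gpa shift (the helper name is made up):

/* Illustrative: with gfn_t as u64 this shift is lossless even on 32-bit
 * hosts; with unsigned long it would silently drop the high bits of large
 * frame numbers. */
static inline gpa_t gfn_to_gpa_sketch(gfn_t gfn)
{
	return (gpa_t)gfn << PAGE_SHIFT;
}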
...@@ -1465,6 +1465,14 @@ extern int sysctl_memory_failure_recovery; ...@@ -1465,6 +1465,14 @@ extern int sysctl_memory_failure_recovery;
extern void shake_page(struct page *p, int access); extern void shake_page(struct page *p, int access);
extern atomic_long_t mce_bad_pages; extern atomic_long_t mce_bad_pages;
extern int soft_offline_page(struct page *page, int flags); extern int soft_offline_page(struct page *page, int flags);
#ifdef CONFIG_MEMORY_FAILURE
int is_hwpoison_address(unsigned long addr);
#else
static inline int is_hwpoison_address(unsigned long addr)
{
return 0;
}
#endif
extern void dump_page(struct page *page); extern void dump_page(struct page *page);
......
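is_hwpoison_address() reports whether the page backing a user virtual address has been marked hardware-poisoned; together with the new is_hwpoison_pfn() above it lets KVM turn a poisoned host page into a machine check injected into the guest rather than an ordinary fault. A hedged sketch of a caller, with an invented sentinel value:

/* Illustrative fallback after a failed page lookup: map a poisoned host
 * address to a sentinel pfn that the caller recognizes with an
 * is_hwpoison_pfn()-style check; returning 0 here just means "ordinary fault". */
static unsigned long resolve_or_poison(unsigned long hva, unsigned long poison_pfn)
{
	if (is_hwpoison_address(hva))
		return poison_pfn;
	return 0;
}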
...@@ -2,6 +2,7 @@ ...@@ -2,6 +2,7 @@
* KVM coalesced MMIO * KVM coalesced MMIO
* *
* Copyright (c) 2008 Bull S.A.S. * Copyright (c) 2008 Bull S.A.S.
* Copyright 2009 Red Hat, Inc. and/or its affiliates.
* *
* Author: Laurent Vivier <Laurent.Vivier@bull.net> * Author: Laurent Vivier <Laurent.Vivier@bull.net>
* *
......