Commit 7ebd3faa authored by Linus Torvalds

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM updates from Paolo Bonzini:
 "First round of KVM updates for 3.14; PPC parts will come next week.

  Nothing major here, just bugfixes all over the place.  The most
  interesting part is the ARM guys' virtualized interrupt controller
  overhaul, which lets userspace get/set the state and thus enables
  migration of ARM VMs"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (67 commits)
  kvm: make KVM_MMU_AUDIT help text more readable
  KVM: s390: Fix memory access error detection
  KVM: nVMX: Update guest activity state field on L2 exits
  KVM: nVMX: Fix nested_run_pending on activity state HLT
  KVM: nVMX: Clean up handling of VMX-related MSRs
  KVM: nVMX: Add tracepoints for nested_vmexit and nested_vmexit_inject
  KVM: nVMX: Pass vmexit parameters to nested_vmx_vmexit
  KVM: nVMX: Leave VMX mode on clearing of feature control MSR
  KVM: VMX: Fix DR6 update on #DB exception
  KVM: SVM: Fix reading of DR6
  KVM: x86: Sync DR7 on KVM_SET_DEBUGREGS
  add support for Hyper-V reference time counter
  KVM: remove useless write to vcpu->hv_clock.tsc_timestamp
  KVM: x86: fix tsc catchup issue with tsc scaling
  KVM: x86: limit PIT timer frequency
  KVM: x86: handle invalid root_hpa everywhere
  kvm: Provide kvm_vcpu_eligible_for_directed_yield() stub
  kvm: vfio: silence GCC warning
  KVM: ARM: Remove duplicate include
  arm/arm64: KVM: relax the requirements of VMA alignment for THP
  ...
parents bb1281f2 7650b687
@@ -2104,7 +2104,7 @@ Returns: 0 on success, -1 on error
 Allows setting an eventfd to directly trigger a guest interrupt.
 kvm_irqfd.fd specifies the file descriptor to use as the eventfd and
 kvm_irqfd.gsi specifies the irqchip pin toggled by this event. When
-an event is tiggered on the eventfd, an interrupt is injected into
+an event is triggered on the eventfd, an interrupt is injected into
 the guest using the specified gsi pin. The irqfd is removed using
 the KVM_IRQFD_FLAG_DEASSIGN flag, specifying both kvm_irqfd.fd
 and kvm_irqfd.gsi.
@@ -2115,7 +2115,7 @@ interrupts. When KVM_IRQFD_FLAG_RESAMPLE is set the user must pass an
 additional eventfd in the kvm_irqfd.resamplefd field. When operating
 in resample mode, posting of an interrupt through kvm_irq.fd asserts
 the specified gsi in the irqchip. When the irqchip is resampled, such
-as from an EOI, the gsi is de-asserted and the user is notifed via
+as from an EOI, the gsi is de-asserted and the user is notified via
 kvm_irqfd.resamplefd. It is the user's responsibility to re-queue
 the interrupt if the device making use of it still requires service.
 Note that closing the resamplefd is not sufficient to disable the
@@ -2327,7 +2327,7 @@ current state. "addr" is ignored.
 Capability: basic
 Architectures: arm, arm64
 Type: vcpu ioctl
-Parameters: struct struct kvm_vcpu_init (in)
+Parameters: struct kvm_vcpu_init (in)
 Returns: 0 on success; -1 on error
 Errors:
  EINVAL:    the target is unknown, or the combination of features is invalid.
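A hedged sketch of calling this ioctl from user space (vcpufd and the choice of target are assumptions for illustration, not part of the document):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Initialize a vcpu as a Cortex-A57 with no optional features set. */
    static int init_vcpu(int vcpufd)
    {
            struct kvm_vcpu_init init;

            memset(&init, 0, sizeof(init));
            init.target = KVM_ARM_TARGET_CORTEX_A57;
            return ioctl(vcpufd, KVM_ARM_VCPU_INIT, &init);
    }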
@@ -2391,7 +2391,8 @@ struct kvm_reg_list {
 This ioctl returns the guest registers that are supported for the
 KVM_GET_ONE_REG/KVM_SET_ONE_REG calls.
-4.85 KVM_ARM_SET_DEVICE_ADDR
+4.85 KVM_ARM_SET_DEVICE_ADDR (deprecated)
 Capability: KVM_CAP_ARM_SET_DEVICE_ADDR
 Architectures: arm, arm64
@@ -2429,6 +2430,10 @@ must be called after calling KVM_CREATE_IRQCHIP, but before calling
 KVM_RUN on any of the VCPUs. Calling this ioctl twice for any of the
 base addresses will return -EEXIST.
+Note, this IOCTL is deprecated and the more flexible SET/GET_DEVICE_ATTR API
+should be used instead.
 4.86 KVM_PPC_RTAS_DEFINE_TOKEN
 Capability: KVM_CAP_PPC_RTAS
......
ARM Virtual Generic Interrupt Controller (VGIC)
===============================================
Device types supported:
KVM_DEV_TYPE_ARM_VGIC_V2 ARM Generic Interrupt Controller v2.0
Only one VGIC instance may be instantiated through either this API or the
legacy KVM_CREATE_IRQCHIP api. The created VGIC will act as the VM interrupt
controller, requiring emulated user-space devices to inject interrupts to the
VGIC instead of directly to CPUs.
Groups:
KVM_DEV_ARM_VGIC_GRP_ADDR
Attributes:
KVM_VGIC_V2_ADDR_TYPE_DIST (rw, 64-bit)
Base address in the guest physical address space of the GIC distributor
register mappings.
KVM_VGIC_V2_ADDR_TYPE_CPU (rw, 64-bit)
Base address in the guest physical address space of the GIC virtual cpu
interface register mappings.
KVM_DEV_ARM_VGIC_GRP_DIST_REGS
Attributes:
The attr field of kvm_device_attr encodes two values:
bits: | 63 .... 40 | 39 .. 32 | 31 .... 0 |
values: | reserved | cpu id | offset |
All distributor regs are (rw, 32-bit)
The offset is relative to the "Distributor base address" as defined in the
GICv2 specs. Getting or setting such a register has the same effect as
reading or writing the register on the actual hardware from the cpu
specified with cpu id field. Note that most distributor fields are not
banked, but return the same value regardless of the cpu id used to access
the register.
Limitations:
- Priorities are not implemented, and registers are RAZ/WI
Errors:
-ENODEV: Getting or setting this register is not yet supported
-EBUSY: One or more VCPUs are running
KVM_DEV_ARM_VGIC_GRP_CPU_REGS
Attributes:
The attr field of kvm_device_attr encodes two values:
bits: | 63 .... 40 | 39 .. 32 | 31 .... 0 |
values: | reserved | cpu id | offset |
All CPU interface regs are (rw, 32-bit)
The offset specifies the offset from the "CPU interface base address" as
defined in the GICv2 specs. Getting or setting such a register has the
same effect as reading or writing the register on the actual hardware.
The Active Priorities Registers APRn are implementation defined, so we set a
fixed format for our implementation that fits with the model of a "GICv2
implementation without the security extensions" which we present to the
guest. This interface always exposes four register APR[0-3] describing the
maximum possible 128 preemption levels. The semantics of the register
indicate if any interrupts in a given preemption level are in the active
state by setting the corresponding bit.
Thus, preemption level X has one or more active interrupts if and only if:
APRn[X mod 32] == 0b1, where n = X / 32
Bits for undefined preemption levels are RAZ/WI.
Limitations:
- Priorities are not implemented, and registers are RAZ/WI
Errors:
-ENODEV: Getting or setting this register is not yet supported
-EBUSY: One or more VCPUs are running
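To make the groups above concrete, here is a minimal user-space sketch (vmfd and the distributor base address are illustrative assumptions) that instantiates a VGIC through the device control API and programs KVM_VGIC_V2_ADDR_TYPE_DIST:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int create_vgic(int vmfd)
    {
            struct kvm_create_device cd = {
                    .type = KVM_DEV_TYPE_ARM_VGIC_V2,
            };
            uint64_t dist_base = 0x08000000;   /* example guest-physical base */
            struct kvm_device_attr attr = {
                    .group = KVM_DEV_ARM_VGIC_GRP_ADDR,
                    .attr  = KVM_VGIC_V2_ADDR_TYPE_DIST,
                    .addr  = (uint64_t)&dist_base, /* pointer to the 64-bit value */
            };

            if (ioctl(vmfd, KVM_CREATE_DEVICE, &cd) < 0)
                    return -1;
            /* cd.fd now refers to the newly created VGIC device */
            return ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);
    }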
@@ -17,6 +17,9 @@ S390:
 S390 uses diagnose instruction as hypercall (0x500) along with hypercall
 number in R1.
+For further information on the S390 diagnose call as supported by KVM,
+refer to Documentation/virtual/kvm/s390-diag.txt.
 PowerPC:
 It uses R3-R10 and hypercall number in R11. R4-R11 are used as output registers.
 Return value is placed in R3.
@@ -74,7 +77,7 @@ Usage example : A vcpu of a paravirtualized guest that is busywaiting in guest
 kernel mode for an event to occur (ex: a spinlock to become available) can
 execute HLT instruction once it has busy-waited for more than a threshold
 time-interval. Execution of HLT instruction would cause the hypervisor to put
-the vcpu to sleep until occurence of an appropriate event. Another vcpu of the
+the vcpu to sleep until occurrence of an appropriate event. Another vcpu of the
 same guest can wakeup the sleeping vcpu by issuing KVM_HC_KICK_CPU hypercall,
 specifying APIC ID (a1) of the vcpu to be woken up. An additional argument (a0)
 is used in the hypercall for future use.
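From the guest side this is a single hypercall; a hedged sketch (passing 0 for the as-yet-unused a0 is an assumption consistent with the text above):

    #include <asm/kvm_para.h>

    /* Wake the vcpu with the given APIC ID out of its HLT sleep. */
    static void kick_vcpu(unsigned long apicid)
    {
            kvm_hypercall2(KVM_HC_KICK_CPU, 0 /* a0: future use */, apicid);
    }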
@@ -112,7 +112,7 @@ The Dirty bit is lost in this case.
 In order to avoid this kind of issue, we always treat the spte as "volatile"
 if it can be updated out of mmu-lock, see spte_has_volatile_bits(), it means,
-the spte is always atomicly updated in this case.
+the spte is always atomically updated in this case.
 3): flush tlbs due to spte updated
 If the spte is updated from writable to readonly, we should flush all TLBs,
@@ -125,7 +125,7 @@ be flushed caused by this reason in mmu_spte_update() since this is a common
 function to update spte (present -> present).
 Since the spte is "volatile" if it can be updated out of mmu-lock, we always
-atomicly update the spte, the race caused by fast page fault can be avoided,
+atomically update the spte, the race caused by fast page fault can be avoided,
 See the comments in spte_has_volatile_bits() and mmu_spte_update().
 3. Reference
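"Atomically updated" here means a compare-and-exchange rather than a plain store; a minimal sketch of the idea, assuming a 64-bit spte (the helper name is illustrative, not the kernel's):

    /* Install new_spte only if the spte still holds old_spte; the return
     * value is what was actually found, so the caller can recover any
     * Accessed/Dirty bits set by a racing fast page fault. */
    static u64 try_update_spte(u64 *sptep, u64 old_spte, u64 new_spte)
    {
            return cmpxchg64(sptep, old_spte, new_spte);
    }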
......
@@ -115,7 +115,7 @@ If any other bit changes in the MSR, please still use mtmsr(d).
 Patched instructions
 ====================
-The "ld" and "std" instructions are transormed to "lwz" and "stw" instructions
+The "ld" and "std" instructions are transformed to "lwz" and "stw" instructions
 respectively on 32 bit systems with an added offset of 4 to accommodate for big
 endianness.
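For example, under this rule a 64-bit magic-page load such as "ld r5, 8(r6)" would be patched to "lwz r5, 12(r6)" (an illustrative encoding, not taken from the patch): on a big-endian 32-bit system the least-significant word of the 64-bit field sits 4 bytes further in, so the adjusted offset keeps the narrower load semantically equivalent.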
......
The s390 DIAGNOSE call on KVM
=============================
KVM on s390 supports the DIAGNOSE call for making hypercalls, both for
native hypercalls and for selected hypercalls found on other s390
hypervisors.
Note that bits are numbered as by the usual s390 convention (most significant
bit on the left).
General remarks
---------------
DIAGNOSE calls by the guest cause a mandatory intercept. This implies
all supported DIAGNOSE calls need to be handled by either KVM or its
userspace.
All DIAGNOSE calls supported by KVM use the RS-a format:
  --------------------------------------
  |  '83'  | R1 | R3 | B2 |     D2     |
  --------------------------------------
  0        8    12   16   20          31
The second-operand address (obtained by the base/displacement calculation)
is not used to address data. Instead, bits 48-63 of this address specify
the function code, and bits 0-47 are ignored.
The supported DIAGNOSE function codes vary by the userspace used. For
DIAGNOSE function codes not specific to KVM, please refer to the
documentation for the s390 hypervisors defining them.
DIAGNOSE function code 'X'500' - KVM virtio functions
-----------------------------------------------------
If the function code specifies 0x500, various virtio-related functions
are performed.
General register 1 contains the virtio subfunction code. Supported
virtio subfunctions depend on KVM's userspace. Generally, userspace
provides either s390-virtio (subcodes 0-2) or virtio-ccw (subcode 3).
Upon completion of the DIAGNOSE instruction, general register 2 contains
the function's return code, which is either a return code or a subcode
specific value.
Subcode 0 - s390-virtio notification and early console printk
Handled by userspace.
Subcode 1 - s390-virtio reset
Handled by userspace.
Subcode 2 - s390-virtio set status
Handled by userspace.
Subcode 3 - virtio-ccw notification
Handled by either userspace or KVM (ioeventfd case).
General register 2 contains a subchannel-identification word denoting
the subchannel of the virtio-ccw proxy device to be notified.
General register 3 contains the number of the virtqueue to be notified.
General register 4 contains a 64bit identifier for KVM usage (the
kvm_io_bus cookie). If general register 4 does not contain a valid
identifier, it is ignored.
After completion of the DIAGNOSE call, general register 2 may contain
a 64bit identifier (in the kvm_io_bus cookie case).
See also the virtio standard for a discussion of this hypercall.
DIAGNOSE function code 'X'501' - KVM breakpoint
----------------------------------------------
If the function code specifies 0x501, breakpoint functions may be performed.
This function code is handled by userspace.
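For subcode 3, the guest-side notification boils down to a single DIAGNOSE with the registers loaded as described above. A hedged sketch modeled on what a virtio-ccw guest driver does (the exact register constraints and the omitted gpr 4 cookie handling are simplifications, not quotes from this document):

    /* Notify the host that virtqueue 'queue' of the virtio-ccw proxy
     * device identified by 'schid' has new buffers (function code 0x500,
     * subcode 3).  The return code comes back in general register 2. */
    static inline long kvm_virtio_ccw_notify(unsigned long schid,
                                             unsigned long queue)
    {
            register unsigned long subcode asm("1") = 3;
            register unsigned long rc asm("2") = schid;
            register unsigned long vq asm("3") = queue;

            asm volatile ("diag 2,4,0x500"
                          : "+d" (rc)
                          : "d" (subcode), "d" (vq)
                          : "memory", "cc");
            return rc;
    }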
@@ -467,7 +467,7 @@ at any time. This causes problems as the passage of real time, the injection
 of machine interrupts and the associated clock sources are no longer completely
 synchronized with real time.
-This same problem can occur on native harware to a degree, as SMM mode may
+This same problem can occur on native hardware to a degree, as SMM mode may
 steal cycles from the naturally on X86 systems when SMM mode is used by the
 BIOS, but not in such an extreme fashion. However, the fact that SMM mode may
 cause similar problems to virtualization makes it a good justification for
......
@@ -4915,7 +4915,7 @@ F:      include/linux/sunrpc/
 F:      include/uapi/linux/sunrpc/
 KERNEL VIRTUAL MACHINE (KVM)
-M:      Gleb Natapov <gleb@redhat.com>
+M:      Gleb Natapov <gleb@kernel.org>
 M:      Paolo Bonzini <pbonzini@redhat.com>
 L:      kvm@vger.kernel.org
 W:      http://www.linux-kvm.org
......
@@ -225,4 +225,7 @@ static inline int kvm_arch_dev_ioctl_check_extension(long ext)
 int kvm_perf_init(void);
 int kvm_perf_teardown(void);
+u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
+int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
 #endif /* __ARM_KVM_HOST_H__ */
@@ -140,6 +140,7 @@ static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
 }
 #define kvm_flush_dcache_to_poc(a,l)    __cpuc_flush_dcache_area((a), (l))
+#define kvm_virt_to_phys(x)             virt_to_idmap((unsigned long)(x))
 #endif  /* !__ASSEMBLY__ */
......
@@ -119,6 +119,26 @@ struct kvm_arch_memory_slot {
 #define KVM_REG_ARM_32_CRN_MASK         0x0000000000007800
 #define KVM_REG_ARM_32_CRN_SHIFT        11
+#define ARM_CP15_REG_SHIFT_MASK(x,n) \
+        (((x) << KVM_REG_ARM_ ## n ## _SHIFT) & KVM_REG_ARM_ ## n ## _MASK)
+#define __ARM_CP15_REG(op1,crn,crm,op2) \
+        (KVM_REG_ARM | (15 << KVM_REG_ARM_COPROC_SHIFT) | \
+        ARM_CP15_REG_SHIFT_MASK(op1, OPC1) | \
+        ARM_CP15_REG_SHIFT_MASK(crn, 32_CRN) | \
+        ARM_CP15_REG_SHIFT_MASK(crm, CRM) | \
+        ARM_CP15_REG_SHIFT_MASK(op2, 32_OPC2))
+#define ARM_CP15_REG32(...) (__ARM_CP15_REG(__VA_ARGS__) | KVM_REG_SIZE_U32)
+#define __ARM_CP15_REG64(op1,crm) \
+        (__ARM_CP15_REG(op1, 0, crm, 0) | KVM_REG_SIZE_U64)
+#define ARM_CP15_REG64(...) __ARM_CP15_REG64(__VA_ARGS__)
+#define KVM_REG_ARM_TIMER_CTL   ARM_CP15_REG32(0, 14, 3, 1)
+#define KVM_REG_ARM_TIMER_CNT   ARM_CP15_REG64(1, 14)
+#define KVM_REG_ARM_TIMER_CVAL  ARM_CP15_REG64(3, 14)
 /* Normal registers are mapped as coprocessor 16. */
 #define KVM_REG_ARM_CORE        (0x0010 << KVM_REG_ARM_COPROC_SHIFT)
 #define KVM_REG_ARM_CORE_REG(name)      (offsetof(struct kvm_regs, name) / 4)
@@ -143,6 +163,14 @@ struct kvm_arch_memory_slot {
 #define KVM_REG_ARM_VFP_FPINST          0x1009
 #define KVM_REG_ARM_VFP_FPINST2         0x100A
+/* Device Control API: ARM VGIC */
+#define KVM_DEV_ARM_VGIC_GRP_ADDR       0
+#define KVM_DEV_ARM_VGIC_GRP_DIST_REGS  1
+#define KVM_DEV_ARM_VGIC_GRP_CPU_REGS   2
+#define KVM_DEV_ARM_VGIC_CPUID_SHIFT    32
+#define KVM_DEV_ARM_VGIC_CPUID_MASK     (0xffULL << KVM_DEV_ARM_VGIC_CPUID_SHIFT)
+#define KVM_DEV_ARM_VGIC_OFFSET_SHIFT   0
+#define KVM_DEV_ARM_VGIC_OFFSET_MASK    (0xffffffffULL << KVM_DEV_ARM_VGIC_OFFSET_SHIFT)
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT          24
......
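With the timer register IDs defined above, user space can read the virtual counter through the existing ONE_REG interface; a sketch (vcpufd is assumed):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int read_timer_cnt(int vcpufd, uint64_t *cnt)
    {
            struct kvm_one_reg reg = {
                    .id   = KVM_REG_ARM_TIMER_CNT,
                    .addr = (uint64_t)cnt,  /* user buffer for the value */
            };

            return ioctl(vcpufd, KVM_GET_ONE_REG, &reg);
    }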
@@ -138,6 +138,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
        if (ret)
                goto out_free_stage2_pgd;
+       kvm_timer_init(kvm);
        /* Mark the initial VMID generation invalid */
        kvm->arch.vmid_gen = 0;
@@ -189,6 +191,7 @@ int kvm_dev_ioctl_check_extension(long ext)
        case KVM_CAP_IRQCHIP:
                r = vgic_present;
                break;
+       case KVM_CAP_DEVICE_CTRL:
        case KVM_CAP_USER_MEMORY:
        case KVM_CAP_SYNC_MMU:
        case KVM_CAP_DESTROY_MEMORY_REGION_WORKS:
@@ -340,6 +343,13 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
+       /*
+        * The arch-generic KVM code expects the cpu field of a vcpu to be -1
+        * if the vcpu is no longer assigned to a cpu. This is used for the
+        * optimized make_all_cpus_request path.
+        */
+       vcpu->cpu = -1;
        kvm_arm_set_running_vcpu(NULL);
 }
@@ -463,6 +473,8 @@ static void update_vttbr(struct kvm *kvm)
 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 {
+       int ret;
        if (likely(vcpu->arch.has_run_once))
                return 0;
@@ -472,22 +484,12 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
         * Initialize the VGIC before running a vcpu the first time on
         * this VM.
         */
-       if (irqchip_in_kernel(vcpu->kvm) &&
-           unlikely(!vgic_initialized(vcpu->kvm))) {
-               int ret = kvm_vgic_init(vcpu->kvm);
+       if (unlikely(!vgic_initialized(vcpu->kvm))) {
+               ret = kvm_vgic_init(vcpu->kvm);
                if (ret)
                        return ret;
        }
-       /*
-        * Handle the "start in power-off" case by calling into the
-        * PSCI code.
-        */
-       if (test_and_clear_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features)) {
-               *vcpu_reg(vcpu, 0) = KVM_PSCI_FN_CPU_OFF;
-               kvm_psci_call(vcpu);
-       }
        return 0;
 }
@@ -701,6 +703,24 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
        return -EINVAL;
 }
+static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
+                                        struct kvm_vcpu_init *init)
+{
+       int ret;
+
+       ret = kvm_vcpu_set_target(vcpu, init);
+       if (ret)
+               return ret;
+
+       /*
+        * Handle the "start in power-off" case by marking the VCPU as paused.
+        */
+       if (__test_and_clear_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
+               vcpu->arch.pause = true;
+
+       return 0;
+}
 long kvm_arch_vcpu_ioctl(struct file *filp,
                         unsigned int ioctl, unsigned long arg)
 {
@@ -714,8 +734,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
                if (copy_from_user(&init, argp, sizeof(init)))
                        return -EFAULT;
-               return kvm_vcpu_set_target(vcpu, &init);
+               return kvm_arch_vcpu_ioctl_vcpu_init(vcpu, &init);
        }
        case KVM_SET_ONE_REG:
        case KVM_GET_ONE_REG: {
@@ -773,7 +792,7 @@ static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
        case KVM_ARM_DEVICE_VGIC_V2:
                if (!vgic_present)
                        return -ENXIO;
-               return kvm_vgic_set_addr(kvm, type, dev_addr->addr);
+               return kvm_vgic_addr(kvm, type, &dev_addr->addr, true);
        default:
                return -ENODEV;
        }
......
@@ -109,6 +109,83 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
        return -EINVAL;
 }
+#ifndef CONFIG_KVM_ARM_TIMER
+
+#define NUM_TIMER_REGS 0
+
+static int copy_timer_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+{
+       return 0;
+}
+
+static bool is_timer_reg(u64 index)
+{
+       return false;
+}
+
+int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
+{
+       return 0;
+}
+
+u64 kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, u64 regid)
+{
+       return 0;
+}
+
+#else
+
+#define NUM_TIMER_REGS 3
+
+static bool is_timer_reg(u64 index)
+{
+       switch (index) {
+       case KVM_REG_ARM_TIMER_CTL:
+       case KVM_REG_ARM_TIMER_CNT:
+       case KVM_REG_ARM_TIMER_CVAL:
+               return true;
+       }
+       return false;
+}
+
+static int copy_timer_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+{
+       if (put_user(KVM_REG_ARM_TIMER_CTL, uindices))
+               return -EFAULT;
+       uindices++;
+       if (put_user(KVM_REG_ARM_TIMER_CNT, uindices))
+               return -EFAULT;
+       uindices++;
+       if (put_user(KVM_REG_ARM_TIMER_CVAL, uindices))
+               return -EFAULT;
+
+       return 0;
+}
+
+#endif
+
+static int set_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+       void __user *uaddr = (void __user *)(long)reg->addr;
+       u64 val;
+       int ret;
+
+       ret = copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id));
+       if (ret != 0)
+               return ret;
+
+       return kvm_arm_timer_set_reg(vcpu, reg->id, val);
+}
+
+static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+       void __user *uaddr = (void __user *)(long)reg->addr;
+       u64 val;
+
+       val = kvm_arm_timer_get_reg(vcpu, reg->id);
+       return copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id));
+}
+
 static unsigned long num_core_regs(void)
 {
        return sizeof(struct kvm_regs) / sizeof(u32);
@@ -121,7 +198,8 @@ static unsigned long num_core_regs(void)
  */
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 {
-       return num_core_regs() + kvm_arm_num_coproc_regs(vcpu);
+       return num_core_regs() + kvm_arm_num_coproc_regs(vcpu)
+               + NUM_TIMER_REGS;
 }
 /**
@@ -133,6 +211,7 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 {
        unsigned int i;
        const u64 core_reg = KVM_REG_ARM | KVM_REG_SIZE_U32 | KVM_REG_ARM_CORE;
+       int ret;
        for (i = 0; i < sizeof(struct kvm_regs)/sizeof(u32); i++) {
                if (put_user(core_reg | i, uindices))
@@ -140,6 +219,11 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
                uindices++;
        }
+       ret = copy_timer_indices(vcpu, uindices);
+       if (ret)
+               return ret;
+       uindices += NUM_TIMER_REGS;
+
        return kvm_arm_copy_coproc_indices(vcpu, uindices);
 }
@@ -153,6 +237,9 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
        if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
                return get_core_reg(vcpu, reg);
+       if (is_timer_reg(reg->id))
+               return get_timer_reg(vcpu, reg);
+
        return kvm_arm_coproc_get_reg(vcpu, reg);
 }
@@ -166,6 +253,9 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
        if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
                return set_core_reg(vcpu, reg);
+       if (is_timer_reg(reg->id))
+               return set_timer_reg(vcpu, reg);
+
        return kvm_arm_coproc_set_reg(vcpu, reg);
 }
......
@@ -26,8 +26,6 @@
 #include "trace.h"
-#include "trace.h"
-
 typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
 static int handle_svc_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
......
@@ -667,14 +667,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
                gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
        } else {
                /*
-                * Pages belonging to VMAs not aligned to the PMD mapping
-                * granularity cannot be mapped using block descriptors even
-                * if the pages belong to a THP for the process, because the
-                * stage-2 block descriptor will cover more than a single THP
-                * and we loose atomicity for unmapping, updates, and splits
-                * of the THP or other pages in the stage-2 block range.
+                * Pages belonging to memslots that don't have the same
+                * alignment for userspace and IPA cannot be mapped using
+                * block descriptors even if the pages belong to a THP for
+                * the process, because the stage-2 block descriptor will
+                * cover more than a single THP and we loose atomicity for
+                * unmapping, updates, and splits of the THP or other pages
+                * in the stage-2 block range.
                 */
-               if (vma->vm_start & ~PMD_MASK)
+               if ((memslot->userspace_addr & ~PMD_MASK) !=
+                   ((memslot->base_gfn << PAGE_SHIFT) & ~PMD_MASK))
                        force_pte = true;
        }
        up_read(&current->mm->mmap_sem);
@@ -916,9 +918,9 @@ int kvm_mmu_init(void)
 {
        int err;
-       hyp_idmap_start = virt_to_phys(__hyp_idmap_text_start);
-       hyp_idmap_end = virt_to_phys(__hyp_idmap_text_end);
-       hyp_idmap_vector = virt_to_phys(__kvm_hyp_init);
+       hyp_idmap_start = kvm_virt_to_phys(__hyp_idmap_text_start);
+       hyp_idmap_end = kvm_virt_to_phys(__hyp_idmap_text_end);
+       hyp_idmap_vector = kvm_virt_to_phys(__kvm_hyp_init);
        if ((hyp_idmap_start ^ hyp_idmap_end) & PAGE_MASK) {
                /*
@@ -945,7 +947,7 @@ int kvm_mmu_init(void)
         */
        kvm_flush_dcache_to_poc(init_bounce_page, len);
-       phys_base = virt_to_phys(init_bounce_page);
+       phys_base = kvm_virt_to_phys(init_bounce_page);
        hyp_idmap_vector += phys_base - hyp_idmap_start;
        hyp_idmap_start = phys_base;
        hyp_idmap_end = phys_base + len;
......
@@ -54,15 +54,15 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
                }
        }
-       if (!vcpu)
+       /*
+        * Make sure the caller requested a valid CPU and that the CPU is
+        * turned off.
+        */
+       if (!vcpu || !vcpu->arch.pause)
                return KVM_PSCI_RET_INVAL;
        target_pc = *vcpu_reg(source_vcpu, 2);
-       wq = kvm_arch_vcpu_wq(vcpu);
-       if (!waitqueue_active(wq))
-               return KVM_PSCI_RET_INVAL;
-
        kvm_reset_vcpu(vcpu);
        /* Gracefully handle Thumb2 entry point */
@@ -79,6 +79,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
        vcpu->arch.pause = false;
        smp_mb();               /* Make sure the above is visible */
+       wq = kvm_arch_vcpu_wq(vcpu);
        wake_up_interruptible(wq);
        return KVM_PSCI_RET_SUCCESS;
......
@@ -26,7 +26,12 @@
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmio.h>
-#define KVM_MAX_VCPUS 4
+#if defined(CONFIG_KVM_ARM_MAX_VCPUS)
+#define KVM_MAX_VCPUS CONFIG_KVM_ARM_MAX_VCPUS
+#else
+#define KVM_MAX_VCPUS 0
+#endif
 #define KVM_USER_MEM_SLOTS 32
 #define KVM_PRIVATE_MEM_SLOTS 4
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
......
@@ -136,6 +136,7 @@ static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
 }
 #define kvm_flush_dcache_to_poc(a,l)    __flush_dcache_area((a), (l))
+#define kvm_virt_to_phys(x)             __virt_to_phys((unsigned long)(x))
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
@@ -55,8 +55,9 @@ struct kvm_regs {
 #define KVM_ARM_TARGET_AEM_V8           0
 #define KVM_ARM_TARGET_FOUNDATION_V8    1
 #define KVM_ARM_TARGET_CORTEX_A57       2
+#define KVM_ARM_TARGET_XGENE_POTENZA    3
-#define KVM_ARM_NUM_TARGETS             3
+#define KVM_ARM_NUM_TARGETS             4
 /* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
 #define KVM_ARM_DEVICE_TYPE_SHIFT       0
@@ -129,6 +130,24 @@ struct kvm_arch_memory_slot {
 #define KVM_REG_ARM64_SYSREG_OP2_MASK   0x0000000000000007
 #define KVM_REG_ARM64_SYSREG_OP2_SHIFT  0
+#define ARM64_SYS_REG_SHIFT_MASK(x,n) \
+        (((x) << KVM_REG_ARM64_SYSREG_ ## n ## _SHIFT) & \
+        KVM_REG_ARM64_SYSREG_ ## n ## _MASK)
+
+#define __ARM64_SYS_REG(op0,op1,crn,crm,op2) \
+        (KVM_REG_ARM64 | KVM_REG_ARM64_SYSREG | \
+        ARM64_SYS_REG_SHIFT_MASK(op0, OP0) | \
+        ARM64_SYS_REG_SHIFT_MASK(op1, OP1) | \
+        ARM64_SYS_REG_SHIFT_MASK(crn, CRN) | \
+        ARM64_SYS_REG_SHIFT_MASK(crm, CRM) | \
+        ARM64_SYS_REG_SHIFT_MASK(op2, OP2))
+
+#define ARM64_SYS_REG(...) (__ARM64_SYS_REG(__VA_ARGS__) | KVM_REG_SIZE_U64)
+
+#define KVM_REG_ARM_TIMER_CTL   ARM64_SYS_REG(3, 3, 14, 3, 1)
+#define KVM_REG_ARM_TIMER_CNT   ARM64_SYS_REG(3, 3, 14, 3, 2)
+#define KVM_REG_ARM_TIMER_CVAL  ARM64_SYS_REG(3, 3, 14, 0, 2)
+
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT          24
 #define KVM_ARM_IRQ_TYPE_MASK           0xff
......
@@ -36,6 +36,17 @@ config KVM_ARM_HOST
        ---help---
          Provides host support for ARM processors.
+config KVM_ARM_MAX_VCPUS
+       int "Number maximum supported virtual CPUs per VM"
+       depends on KVM_ARM_HOST
+       default 4
+       help
+         Static number of max supported virtual CPUs per VM.
+
+         If you choose a high number, the vcpu structures will be quite
+         large, so only choose a reasonable number that you expect to
+         actually use.
+
 config KVM_ARM_VGIC
        bool
        depends on KVM_ARM_HOST && OF
......
@@ -207,20 +207,26 @@ int __attribute_const__ kvm_target_cpu(void)
        unsigned long implementor = read_cpuid_implementor();
        unsigned long part_number = read_cpuid_part_number();
-       if (implementor != ARM_CPU_IMP_ARM)
-               return -EINVAL;
-
-       switch (part_number) {
-       case ARM_CPU_PART_AEM_V8:
-               return KVM_ARM_TARGET_AEM_V8;
-       case ARM_CPU_PART_FOUNDATION:
-               return KVM_ARM_TARGET_FOUNDATION_V8;
-       case ARM_CPU_PART_CORTEX_A57:
-               /* Currently handled by the generic backend */
-               return KVM_ARM_TARGET_CORTEX_A57;
-       default:
-               return -EINVAL;
-       }
+       switch (implementor) {
+       case ARM_CPU_IMP_ARM:
+               switch (part_number) {
+               case ARM_CPU_PART_AEM_V8:
+                       return KVM_ARM_TARGET_AEM_V8;
+               case ARM_CPU_PART_FOUNDATION:
+                       return KVM_ARM_TARGET_FOUNDATION_V8;
+               case ARM_CPU_PART_CORTEX_A57:
+                       return KVM_ARM_TARGET_CORTEX_A57;
+               };
+               break;
+       case ARM_CPU_IMP_APM:
+               switch (part_number) {
+               case APM_CPU_PART_POTENZA:
+                       return KVM_ARM_TARGET_XGENE_POTENZA;
+               };
+               break;
+       };
+
+       return -EINVAL;
 }
 int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
......
@@ -39,9 +39,6 @@ static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
 static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-       if (kvm_psci_call(vcpu))
-               return 1;
-
        kvm_inject_undefined(vcpu);
        return 1;
 }
......
@@ -90,6 +90,9 @@ static int __init sys_reg_genericv8_init(void)
                                          &genericv8_target_table);
        kvm_register_target_sys_reg_table(KVM_ARM_TARGET_CORTEX_A57,
                                          &genericv8_target_table);
+       kvm_register_target_sys_reg_table(KVM_ARM_TARGET_XGENE_POTENZA,
+                                         &genericv8_target_table);
+
        return 0;
 }
 late_initcall(sys_reg_genericv8_init);
@@ -702,7 +702,7 @@ static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 out:
        srcu_read_unlock(&vcpu->kvm->srcu, idx);
        if (r > 0) {
-               kvm_resched(vcpu);
+               cond_resched();
                idx = srcu_read_lock(&vcpu->kvm->srcu);
                goto again;
        }
......
@@ -1352,7 +1352,7 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
        kvm_guest_exit();
        preempt_enable();
-       kvm_resched(vcpu);
+       cond_resched();
        spin_lock(&vc->lock);
        now = get_tb();
......
@@ -5,6 +5,7 @@
 #define SIGP_SENSE                    1
 #define SIGP_EXTERNAL_CALL            2
 #define SIGP_EMERGENCY_SIGNAL         3
+#define SIGP_START                    4
 #define SIGP_STOP                     5
 #define SIGP_RESTART                  6
 #define SIGP_STOP_AND_STORE_STATUS    9
@@ -12,6 +13,7 @@
 #define SIGP_SET_PREFIX              13
 #define SIGP_STORE_STATUS_AT_ADDRESS 14
 #define SIGP_SET_ARCHITECTURE        18
+#define SIGP_COND_EMERGENCY_SIGNAL   19
 #define SIGP_SENSE_RUNNING           21
 /* SIGP condition codes */
......
@@ -121,7 +121,7 @@ static int __diag_virtio_hypercall(struct kvm_vcpu *vcpu)
         * - gpr 4 contains the index on the bus (optionally)
         */
        ret = kvm_io_bus_write_cookie(vcpu->kvm, KVM_VIRTIO_CCW_NOTIFY_BUS,
-                                     vcpu->run->s.regs.gprs[2],
+                                     vcpu->run->s.regs.gprs[2] & 0xffffffff,
                                      8, &vcpu->run->s.regs.gprs[3],
                                      vcpu->run->s.regs.gprs[4]);
@@ -137,7 +137,7 @@ static int __diag_virtio_hypercall(struct kvm_vcpu *vcpu)
 int kvm_s390_handle_diag(struct kvm_vcpu *vcpu)
 {
-       int code = (vcpu->arch.sie_block->ipb & 0xfff0000) >> 16;
+       int code = kvm_s390_get_base_disp_rs(vcpu) & 0xffff;
        if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)
                return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
......
@@ -732,14 +732,16 @@ static int vcpu_post_run(struct kvm_vcpu *vcpu, int exit_reason)
        if (exit_reason >= 0) {
                rc = 0;
+       } else if (kvm_is_ucontrol(vcpu->kvm)) {
+               vcpu->run->exit_reason = KVM_EXIT_S390_UCONTROL;
+               vcpu->run->s390_ucontrol.trans_exc_code =
+                                               current->thread.gmap_addr;
+               vcpu->run->s390_ucontrol.pgm_code = 0x10;
+               rc = -EREMOTE;
        } else {
-               if (kvm_is_ucontrol(vcpu->kvm)) {
-                       rc = SIE_INTERCEPT_UCONTROL;
-               } else {
-                       VCPU_EVENT(vcpu, 3, "%s", "fault in sie instruction");
-                       trace_kvm_s390_sie_fault(vcpu);
-                       rc = kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
-               }
+               VCPU_EVENT(vcpu, 3, "%s", "fault in sie instruction");
+               trace_kvm_s390_sie_fault(vcpu);
+               rc = kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
        }
        memcpy(&vcpu->run->s.regs.gprs[14], &vcpu->arch.sie_block->gg14, 16);
@@ -833,16 +835,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
                rc = -EINTR;
        }
-#ifdef CONFIG_KVM_S390_UCONTROL
-       if (rc == SIE_INTERCEPT_UCONTROL) {
-               kvm_run->exit_reason = KVM_EXIT_S390_UCONTROL;
-               kvm_run->s390_ucontrol.trans_exc_code =
-                       current->thread.gmap_addr;
-               kvm_run->s390_ucontrol.pgm_code = 0x10;
-               rc = 0;
-       }
-#endif
        if (rc == -EOPNOTSUPP) {
                /* intercept cannot be handled in-kernel, prepare kvm-run */
                kvm_run->exit_reason = KVM_EXIT_S390_SIEIC;
@@ -885,10 +877,11 @@ static int __guestcopy(struct kvm_vcpu *vcpu, u64 guestdest, void *from,
  * KVM_S390_STORE_STATUS_NOADDR: -> 0x1200 on 64 bit
  * KVM_S390_STORE_STATUS_PREFIXED: -> prefix
  */
-int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
+int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr)
 {
        unsigned char archmode = 1;
        int prefix;
+       u64 clkcomp;
        if (addr == KVM_S390_STORE_STATUS_NOADDR) {
                if (copy_to_guest_absolute(vcpu, 163ul, &archmode, 1))
@@ -903,15 +896,6 @@ int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
        } else
                prefix = 0;
-       /*
-        * The guest FPRS and ACRS are in the host FPRS/ACRS due to the lazy
-        * copying in vcpu load/put. Lets update our copies before we save
-        * it into the save area
-        */
-       save_fp_ctl(&vcpu->arch.guest_fpregs.fpc);
-       save_fp_regs(vcpu->arch.guest_fpregs.fprs);
-       save_access_regs(vcpu->run->s.regs.acrs);
-
        if (__guestcopy(vcpu, addr + offsetof(struct save_area, fp_regs),
                        vcpu->arch.guest_fpregs.fprs, 128, prefix))
                return -EFAULT;
@@ -941,8 +925,9 @@ int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
                        &vcpu->arch.sie_block->cputm, 8, prefix))
                return -EFAULT;
+       clkcomp = vcpu->arch.sie_block->ckc >> 8;
        if (__guestcopy(vcpu, addr + offsetof(struct save_area, clk_cmp),
-                       &vcpu->arch.sie_block->ckc, 8, prefix))
+                       &clkcomp, 8, prefix))
                return -EFAULT;
        if (__guestcopy(vcpu, addr + offsetof(struct save_area, acc_regs),
@@ -956,6 +941,20 @@ int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
        return 0;
 }
+int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
+{
+       /*
+        * The guest FPRS and ACRS are in the host FPRS/ACRS due to the lazy
+        * copying in vcpu load/put. Lets update our copies before we save
+        * it into the save area
+        */
+       save_fp_ctl(&vcpu->arch.guest_fpregs.fpc);
+       save_fp_regs(vcpu->arch.guest_fpregs.fprs);
+       save_access_regs(vcpu->run->s.regs.acrs);
+
+       return kvm_s390_store_status_unloaded(vcpu, addr);
+}
+
 static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
                                     struct kvm_enable_cap *cap)
 {
......
@@ -19,16 +19,11 @@
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
-/* The current code can have up to 256 pages for virtio */
-#define VIRTIODESCSPACE (256ul * 4096ul)
-
 typedef int (*intercept_handler_t)(struct kvm_vcpu *vcpu);
 /* declare vfacilities extern */
 extern unsigned long *vfacilities;
-/* negativ values are error codes, positive values for internal conditions */
-#define SIE_INTERCEPT_UCONTROL (1<<0)
-
 int kvm_handle_sie_intercept(struct kvm_vcpu *vcpu);
 #define VM_EVENT(d_kvm, d_loglevel, d_string, d_args...)\
@@ -133,7 +128,6 @@ int __must_check kvm_s390_inject_vm(struct kvm *kvm,
 int __must_check kvm_s390_inject_vcpu(struct kvm_vcpu *vcpu,
                                      struct kvm_s390_interrupt *s390int);
 int __must_check kvm_s390_inject_program_int(struct kvm_vcpu *vcpu, u16 code);
-int __must_check kvm_s390_inject_sigp_stop(struct kvm_vcpu *vcpu, int action);
 struct kvm_s390_interrupt_info *kvm_s390_get_io_int(struct kvm *kvm,
                                                    u64 cr6, u64 schid);
@@ -150,8 +144,8 @@ int kvm_s390_handle_eb(struct kvm_vcpu *vcpu);
 int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu);
 /* implemented in kvm-s390.c */
-int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu,
-                              unsigned long addr);
+int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr);
+int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr);
 void s390_vcpu_block(struct kvm_vcpu *vcpu);
 void s390_vcpu_unblock(struct kvm_vcpu *vcpu);
 void exit_sie(struct kvm_vcpu *vcpu);
......
@@ -197,7 +197,7 @@ static int handle_tpi(struct kvm_vcpu *vcpu)
        if (addr & 3)
                return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
        cc = 0;
-       inti = kvm_s390_get_io_int(vcpu->kvm, vcpu->run->s.regs.crs[6], 0);
+       inti = kvm_s390_get_io_int(vcpu->kvm, vcpu->arch.sie_block->gcr[6], 0);
        if (!inti)
                goto no_interrupt;
        cc = 1;
@@ -638,7 +638,6 @@ static int handle_pfmf(struct kvm_vcpu *vcpu)
 static const intercept_handler_t b9_handlers[256] = {
        [0x8d] = handle_epsw,
-       [0x9c] = handle_io_inst,
        [0xaf] = handle_pfmf,
 };
@@ -731,7 +730,6 @@ static int handle_lctlg(struct kvm_vcpu *vcpu)
 static const intercept_handler_t eb_handlers[256] = {
        [0x2f] = handle_lctlg,
-       [0x8a] = handle_io_inst,
 };
 int kvm_s390_handle_eb(struct kvm_vcpu *vcpu)
......
/* /*
* handling interprocessor communication * handling interprocessor communication
* *
* Copyright IBM Corp. 2008, 2009 * Copyright IBM Corp. 2008, 2013
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License (version 2 only) * it under the terms of the GNU General Public License (version 2 only)
...@@ -89,6 +89,37 @@ static int __sigp_emergency(struct kvm_vcpu *vcpu, u16 cpu_addr) ...@@ -89,6 +89,37 @@ static int __sigp_emergency(struct kvm_vcpu *vcpu, u16 cpu_addr)
return rc; return rc;
} }
static int __sigp_conditional_emergency(struct kvm_vcpu *vcpu, u16 cpu_addr,
u16 asn, u64 *reg)
{
struct kvm_vcpu *dst_vcpu = NULL;
const u64 psw_int_mask = PSW_MASK_IO | PSW_MASK_EXT;
u16 p_asn, s_asn;
psw_t *psw;
u32 flags;
if (cpu_addr < KVM_MAX_VCPUS)
dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
if (!dst_vcpu)
return SIGP_CC_NOT_OPERATIONAL;
flags = atomic_read(&dst_vcpu->arch.sie_block->cpuflags);
psw = &dst_vcpu->arch.sie_block->gpsw;
p_asn = dst_vcpu->arch.sie_block->gcr[4] & 0xffff; /* Primary ASN */
s_asn = dst_vcpu->arch.sie_block->gcr[3] & 0xffff; /* Secondary ASN */
/* Deliver the emergency signal? */
if (!(flags & CPUSTAT_STOPPED)
|| (psw->mask & psw_int_mask) != psw_int_mask
|| ((flags & CPUSTAT_WAIT) && psw->addr != 0)
|| (!(flags & CPUSTAT_WAIT) && (asn == p_asn || asn == s_asn))) {
return __sigp_emergency(vcpu, cpu_addr);
} else {
*reg &= 0xffffffff00000000UL;
*reg |= SIGP_STATUS_INCORRECT_STATE;
return SIGP_CC_STATUS_STORED;
}
}
static int __sigp_external_call(struct kvm_vcpu *vcpu, u16 cpu_addr) static int __sigp_external_call(struct kvm_vcpu *vcpu, u16 cpu_addr)
{ {
struct kvm_s390_float_interrupt *fi = &vcpu->kvm->arch.float_int; struct kvm_s390_float_interrupt *fi = &vcpu->kvm->arch.float_int;
...@@ -130,6 +161,7 @@ static int __sigp_external_call(struct kvm_vcpu *vcpu, u16 cpu_addr) ...@@ -130,6 +161,7 @@ static int __sigp_external_call(struct kvm_vcpu *vcpu, u16 cpu_addr)
static int __inject_sigp_stop(struct kvm_s390_local_interrupt *li, int action) static int __inject_sigp_stop(struct kvm_s390_local_interrupt *li, int action)
{ {
struct kvm_s390_interrupt_info *inti; struct kvm_s390_interrupt_info *inti;
int rc = SIGP_CC_ORDER_CODE_ACCEPTED;
inti = kzalloc(sizeof(*inti), GFP_ATOMIC); inti = kzalloc(sizeof(*inti), GFP_ATOMIC);
if (!inti) if (!inti)
...@@ -139,6 +171,8 @@ static int __inject_sigp_stop(struct kvm_s390_local_interrupt *li, int action) ...@@ -139,6 +171,8 @@ static int __inject_sigp_stop(struct kvm_s390_local_interrupt *li, int action)
spin_lock_bh(&li->lock); spin_lock_bh(&li->lock);
if ((atomic_read(li->cpuflags) & CPUSTAT_STOPPED)) { if ((atomic_read(li->cpuflags) & CPUSTAT_STOPPED)) {
kfree(inti); kfree(inti);
if ((action & ACTION_STORE_ON_STOP) != 0)
rc = -ESHUTDOWN;
goto out; goto out;
} }
list_add_tail(&inti->list, &li->list); list_add_tail(&inti->list, &li->list);
...@@ -150,7 +184,7 @@ static int __inject_sigp_stop(struct kvm_s390_local_interrupt *li, int action) ...@@ -150,7 +184,7 @@ static int __inject_sigp_stop(struct kvm_s390_local_interrupt *li, int action)
out: out:
spin_unlock_bh(&li->lock); spin_unlock_bh(&li->lock);
return SIGP_CC_ORDER_CODE_ACCEPTED; return rc;
} }
static int __sigp_stop(struct kvm_vcpu *vcpu, u16 cpu_addr, int action) static int __sigp_stop(struct kvm_vcpu *vcpu, u16 cpu_addr, int action)
...@@ -174,13 +208,17 @@ static int __sigp_stop(struct kvm_vcpu *vcpu, u16 cpu_addr, int action) ...@@ -174,13 +208,17 @@ static int __sigp_stop(struct kvm_vcpu *vcpu, u16 cpu_addr, int action)
unlock: unlock:
spin_unlock(&fi->lock); spin_unlock(&fi->lock);
VCPU_EVENT(vcpu, 4, "sent sigp stop to cpu %x", cpu_addr); VCPU_EVENT(vcpu, 4, "sent sigp stop to cpu %x", cpu_addr);
return rc;
}
int kvm_s390_inject_sigp_stop(struct kvm_vcpu *vcpu, int action) if ((action & ACTION_STORE_ON_STOP) != 0 && rc == -ESHUTDOWN) {
{ /* If the CPU has already been stopped, we still have
struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int; * to save the status when doing stop-and-store. This
return __inject_sigp_stop(li, action); * has to be done after unlocking all spinlocks. */
struct kvm_vcpu *dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
rc = kvm_s390_store_status_unloaded(dst_vcpu,
KVM_S390_STORE_STATUS_NOADDR);
}
return rc;
} }
static int __sigp_set_arch(struct kvm_vcpu *vcpu, u32 parameter) static int __sigp_set_arch(struct kvm_vcpu *vcpu, u32 parameter)
@@ -262,6 +300,37 @@ static int __sigp_set_prefix(struct kvm_vcpu *vcpu, u16 cpu_addr, u32 address,
 	return rc;
 }
 
+static int __sigp_store_status_at_addr(struct kvm_vcpu *vcpu, u16 cpu_id,
+				       u32 addr, u64 *reg)
+{
+	struct kvm_vcpu *dst_vcpu = NULL;
+	int flags;
+	int rc;
+
+	if (cpu_id < KVM_MAX_VCPUS)
+		dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_id);
+	if (!dst_vcpu)
+		return SIGP_CC_NOT_OPERATIONAL;
+
+	spin_lock_bh(&dst_vcpu->arch.local_int.lock);
+	flags = atomic_read(dst_vcpu->arch.local_int.cpuflags);
+	spin_unlock_bh(&dst_vcpu->arch.local_int.lock);
+
+	if (!(flags & CPUSTAT_STOPPED)) {
+		*reg &= 0xffffffff00000000UL;
+		*reg |= SIGP_STATUS_INCORRECT_STATE;
+		return SIGP_CC_STATUS_STORED;
+	}
+
+	addr &= 0x7ffffe00;
+	rc = kvm_s390_store_status_unloaded(dst_vcpu, addr);
+	if (rc == -EFAULT) {
+		*reg &= 0xffffffff00000000UL;
+		*reg |= SIGP_STATUS_INVALID_PARAMETER;
+		rc = SIGP_CC_STATUS_STORED;
+	}
+	return rc;
+}
+
 static int __sigp_sense_running(struct kvm_vcpu *vcpu, u16 cpu_addr,
 				u64 *reg)
 {
@@ -294,7 +363,8 @@ static int __sigp_sense_running(struct kvm_vcpu *vcpu, u16 cpu_addr,
 	return rc;
 }
 
-static int __sigp_restart(struct kvm_vcpu *vcpu, u16 cpu_addr)
+/* Test whether the destination CPU is available and not busy */
+static int sigp_check_callable(struct kvm_vcpu *vcpu, u16 cpu_addr)
 {
 	struct kvm_s390_float_interrupt *fi = &vcpu->kvm->arch.float_int;
 	struct kvm_s390_local_interrupt *li;
@@ -313,9 +383,6 @@ static int __sigp_restart(struct kvm_vcpu *vcpu, u16 cpu_addr)
 	spin_lock_bh(&li->lock);
 	if (li->action_bits & ACTION_STOP_ON_STOP)
 		rc = SIGP_CC_BUSY;
-	else
-		VCPU_EVENT(vcpu, 4, "sigp restart %x to handle userspace",
-			   cpu_addr);
 	spin_unlock_bh(&li->lock);
 out:
 	spin_unlock(&fi->lock);
@@ -366,6 +433,10 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu)
 		rc = __sigp_stop(vcpu, cpu_addr, ACTION_STORE_ON_STOP |
 					ACTION_STOP_ON_STOP);
 		break;
+	case SIGP_STORE_STATUS_AT_ADDRESS:
+		rc = __sigp_store_status_at_addr(vcpu, cpu_addr, parameter,
+						 &vcpu->run->s.regs.gprs[r1]);
+		break;
 	case SIGP_SET_ARCHITECTURE:
 		vcpu->stat.instruction_sigp_arch++;
 		rc = __sigp_set_arch(vcpu, parameter);
@@ -375,17 +446,31 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu)
 		rc = __sigp_set_prefix(vcpu, cpu_addr, parameter,
 				       &vcpu->run->s.regs.gprs[r1]);
 		break;
+	case SIGP_COND_EMERGENCY_SIGNAL:
+		rc = __sigp_conditional_emergency(vcpu, cpu_addr, parameter,
+						  &vcpu->run->s.regs.gprs[r1]);
+		break;
 	case SIGP_SENSE_RUNNING:
 		vcpu->stat.instruction_sigp_sense_running++;
 		rc = __sigp_sense_running(vcpu, cpu_addr,
 					  &vcpu->run->s.regs.gprs[r1]);
 		break;
+	case SIGP_START:
+		rc = sigp_check_callable(vcpu, cpu_addr);
+		if (rc == SIGP_CC_ORDER_CODE_ACCEPTED)
+			rc = -EOPNOTSUPP;    /* Handle START in user space */
+		break;
 	case SIGP_RESTART:
 		vcpu->stat.instruction_sigp_restart++;
-		rc = __sigp_restart(vcpu, cpu_addr);
-		if (rc == SIGP_CC_BUSY)
-			break;
-		/* user space must know about restart */
+		rc = sigp_check_callable(vcpu, cpu_addr);
+		if (rc == SIGP_CC_ORDER_CODE_ACCEPTED) {
+			VCPU_EVENT(vcpu, 4,
+				   "sigp restart %x to handle userspace",
+				   cpu_addr);
+			/* user space must know about restart */
+			rc = -EOPNOTSUPP;
+		}
+		break;
 	default:
 		return -EOPNOTSUPP;
 	}
@@ -393,7 +478,6 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu)
 	if (rc < 0)
 		return rc;
 
-	vcpu->arch.sie_block->gpsw.mask &= ~(3ul << 44);
-	vcpu->arch.sie_block->gpsw.mask |= (rc & 3ul) << 44;
+	kvm_s390_set_psw_cc(vcpu, rc);
 	return 0;
 }
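
The two removed PSW updates above are what kvm_s390_set_psw_cc() now centralizes. The helper's body is not part of this diff; a minimal sketch of what it must do, inferred from the removed lines (bits 44-45 of the s390 PSW mask hold the condition code):

    static inline void kvm_s390_set_psw_cc(struct kvm_vcpu *vcpu, unsigned long cc)
    {
        /* clear the old condition code, then store the new one */
        vcpu->arch.sie_block->gpsw.mask &= ~(3UL << 44);
        vcpu->arch.sie_block->gpsw.mask |= (cc & 3UL) << 44;
    }
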
@@ -175,6 +175,7 @@ TRACE_EVENT(kvm_s390_intercept_validity,
 	{SIGP_STOP_AND_STORE_STATUS, "stop and store status"},	\
 	{SIGP_SET_ARCHITECTURE, "set architecture"},		\
 	{SIGP_SET_PREFIX, "set prefix"},			\
+	{SIGP_STORE_STATUS_AT_ADDRESS, "store status at addr"},	\
 	{SIGP_SENSE_RUNNING, "sense running"},			\
 	{SIGP_RESTART, "restart"}
...
@@ -605,6 +605,7 @@ struct kvm_arch {
 	/* fields used by HYPER-V emulation */
 	u64 hv_guest_os_id;
 	u64 hv_hypercall;
+	u64 hv_tsc_page;
 
 	#ifdef CONFIG_KVM_MMU_AUDIT
 	int audit_point;
@@ -699,6 +700,8 @@ struct kvm_x86_ops {
 	void (*set_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
 	void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
 	void (*set_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+	u64 (*get_dr6)(struct kvm_vcpu *vcpu);
+	void (*set_dr6)(struct kvm_vcpu *vcpu, unsigned long value);
 	void (*set_dr7)(struct kvm_vcpu *vcpu, unsigned long value);
 	void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
 	unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
...
@@ -100,6 +100,7 @@
 #define VMX_MISC_PREEMPTION_TIMER_RATE_MASK	0x0000001f
 #define VMX_MISC_SAVE_EFER_LMA			0x00000020
+#define VMX_MISC_ACTIVITY_HLT			0x00000040
 
 /* VMCS Encodings */
 enum vmcs_field {
...
@@ -28,6 +28,9 @@
 /* Partition Reference Counter (HV_X64_MSR_TIME_REF_COUNT) available*/
 #define HV_X64_MSR_TIME_REF_COUNT_AVAILABLE	(1 << 1)
 
+/* A partition's reference time stamp counter (TSC) page */
+#define HV_X64_MSR_REFERENCE_TSC		0x40000021
+
 /*
  * There is a single feature flag that signifies the presence of the MSR
  * that can be used to retrieve both the local APIC Timer frequency as
@@ -198,6 +201,9 @@
 #define HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_MASK	\
 		(~((1ull << HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
 
+#define HV_X64_MSR_TSC_REFERENCE_ENABLE		0x00000001
+#define HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT	12
+
 #define HV_PROCESSOR_POWER_STATE_C0		0
 #define HV_PROCESSOR_POWER_STATE_C1		1
 #define HV_PROCESSOR_POWER_STATE_C2		2
@@ -210,4 +216,11 @@
 #define HV_STATUS_INVALID_ALIGNMENT		4
 #define HV_STATUS_INSUFFICIENT_BUFFERS		19
 
+typedef struct _HV_REFERENCE_TSC_PAGE {
+	__u32 tsc_sequence;
+	__u32 res1;
+	__u64 tsc_scale;
+	__s64 tsc_offset;
+} HV_REFERENCE_TSC_PAGE, *PHV_REFERENCE_TSC_PAGE;
+
 #endif
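
A guest consumes this structure with the lockless sequence protocol described in the Hyper-V TLFS: the reference time (in 100 ns units) is the high 64 bits of the 128-bit product tsc * tsc_scale, plus tsc_offset, retried if tsc_sequence changes mid-read. A sketch of the guest-side read, assuming the kernel helpers rdtsc() and mul_u64_u64_shr(); the invalid-sequence value and the MSR fallback are spec details, not part of this diff:

    static u64 hv_read_reference_time(volatile HV_REFERENCE_TSC_PAGE *p)
    {
        u32 seq;
        u64 tsc, scale;
        s64 offset;

        do {
            seq = p->tsc_sequence;
            if (seq == 0)   /* page not valid: use HV_X64_MSR_TIME_REF_COUNT */
                return hv_read_time_ref_count();    /* hypothetical fallback */
            tsc = rdtsc();
            scale = p->tsc_scale;
            offset = p->tsc_offset;
        } while (p->tsc_sequence != seq);   /* retry on concurrent update */

        /* high 64 bits of tsc * scale, plus offset, in 100 ns units */
        return mul_u64_u64_shr(tsc, scale, 64) + offset;
    }
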
@@ -528,6 +528,7 @@
 #define MSR_IA32_VMX_TRUE_PROCBASED_CTLS 0x0000048e
 #define MSR_IA32_VMX_TRUE_EXIT_CTLS      0x0000048f
 #define MSR_IA32_VMX_TRUE_ENTRY_CTLS     0x00000490
+#define MSR_IA32_VMX_VMFUNC              0x00000491
 
 /* VMX_BASIC bits and bitmasks */
 #define VMX_BASIC_VMCS_SIZE_SHIFT	32
...
@@ -80,7 +80,7 @@ config KVM_MMU_AUDIT
 	depends on KVM && TRACEPOINTS
 	---help---
 	 This option adds a R/W kVM module parameter 'mmu_audit', which allows
-	 audit KVM MMU at runtime.
+	 auditing of KVM MMU events at runtime.
 
 config KVM_DEVICE_ASSIGNMENT
 	bool "KVM legacy PCI device assignment support"
...
@@ -37,6 +37,7 @@
 
 #include "irq.h"
 #include "i8254.h"
+#include "x86.h"
 
 #ifndef CONFIG_X86_64
 #define mod_64(x, y) ((x) - (y) * div64_u64(x, y))
@@ -349,6 +350,23 @@ static void create_pit_timer(struct kvm *kvm, u32 val, int is_period)
 	atomic_set(&ps->pending, 0);
 	ps->irq_ack = 1;
 
+	/*
+	 * Do not allow the guest to program periodic timers with small
+	 * interval, since the hrtimers are not throttled by the host
+	 * scheduler.
+	 */
+	if (ps->is_periodic) {
+		s64 min_period = min_timer_period_us * 1000LL;
+
+		if (ps->period < min_period) {
+			pr_info_ratelimited(
+			    "kvm: requested %lld ns "
+			    "i8254 timer period limited to %lld ns\n",
+			    ps->period, min_period);
+			ps->period = min_period;
+		}
+	}
+
 	hrtimer_start(&ps->timer, ktime_add_ns(ktime_get(), interval),
 		      HRTIMER_MODE_ABS);
 }
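
With the default min_timer_period_us of 500, the clamp above caps the injected PIT rate at roughly 2000 interrupts per second. A worked example of the arithmetic:

    /* Worked example of the clamp, assuming the default module parameter. */
    s64 requested  = 100 * 1000LL;                 /* guest programs 100 us  */
    s64 min_period = 500 /* min_timer_period_us */ * 1000LL;  /* 500,000 ns */
    s64 effective  = requested < min_period ? min_period : requested;
    /* effective == 500000 ns, i.e. at most ~2000 ticks/s are injected */
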
...
@@ -71,9 +71,6 @@
 #define VEC_POS(v) ((v) & (32 - 1))
 #define REG_POS(v) (((v) >> 5) << 4)
 
-static unsigned int min_timer_period_us = 500;
-module_param(min_timer_period_us, uint, S_IRUGO | S_IWUSR);
-
 static inline void apic_set_reg(struct kvm_lapic *apic, int reg_off, u32 val)
 {
 	*((u32 *) (apic->regs + reg_off)) = val;
@@ -435,7 +432,7 @@ static bool pv_eoi_get_pending(struct kvm_vcpu *vcpu)
 	u8 val;
 	if (pv_eoi_get_user(vcpu, &val) < 0)
 		apic_debug("Can't read EOI MSR value: 0x%llx\n",
-			   (unsigned long long)vcpi->arch.pv_eoi.msr_val);
+			   (unsigned long long)vcpu->arch.pv_eoi.msr_val);
 	return val & 0x1;
 }
 
@@ -443,7 +440,7 @@ static void pv_eoi_set_pending(struct kvm_vcpu *vcpu)
 {
 	if (pv_eoi_put_user(vcpu, KVM_PV_EOI_ENABLED) < 0) {
 		apic_debug("Can't set EOI MSR value: 0x%llx\n",
-			   (unsigned long long)vcpi->arch.pv_eoi.msr_val);
+			   (unsigned long long)vcpu->arch.pv_eoi.msr_val);
 		return;
 	}
 	__set_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention);
@@ -453,7 +450,7 @@ static void pv_eoi_clr_pending(struct kvm_vcpu *vcpu)
 {
 	if (pv_eoi_put_user(vcpu, KVM_PV_EOI_DISABLED) < 0) {
 		apic_debug("Can't clear EOI MSR value: 0x%llx\n",
-			   (unsigned long long)vcpi->arch.pv_eoi.msr_val);
+			   (unsigned long long)vcpu->arch.pv_eoi.msr_val);
 		return;
 	}
 	__clear_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention);
...
@@ -2659,6 +2659,9 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write,
 	int emulate = 0;
 	gfn_t pseudo_gfn;
 
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+		return 0;
+
 	for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
 		if (iterator.level == level) {
 			mmu_set_spte(vcpu, iterator.sptep, ACC_ALL,
@@ -2829,6 +2832,9 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 	bool ret = false;
 	u64 spte = 0ull;
 
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+		return false;
+
 	if (!page_fault_can_be_fast(error_code))
 		return false;
 
@@ -3224,6 +3230,9 @@ static u64 walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr)
 	struct kvm_shadow_walk_iterator iterator;
 	u64 spte = 0ull;
 
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+		return spte;
+
 	walk_shadow_page_lockless_begin(vcpu);
 	for_each_shadow_entry_lockless(vcpu, addr, iterator, spte)
 		if (!is_shadow_present_pte(spte))
@@ -4510,6 +4519,9 @@ int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4])
 	u64 spte;
 	int nr_sptes = 0;
 
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+		return nr_sptes;
+
 	walk_shadow_page_lockless_begin(vcpu);
 	for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) {
 		sptes[iterator.level-1] = spte;
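
All four hunks add the same early-out so a shadow-page walker that races with root teardown bails out instead of dereferencing a stale root_hpa. VALID_PAGE() is presumably the existing helper comparing against INVALID_PAGE; shown here for reference under that assumption:

    /* Assumed definitions, from the x86 KVM headers of this era: */
    #define INVALID_PAGE	(~(hpa_t)0)
    #define VALID_PAGE(x)	((x) != INVALID_PAGE)
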
...
@@ -569,6 +569,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 	if (FNAME(gpte_changed)(vcpu, gw, top_level))
 		goto out_gpte_changed;
 
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+		goto out_gpte_changed;
+
 	for (shadow_walk_init(&it, vcpu, addr);
 	     shadow_walk_okay(&it) && it.level > gw->level;
 	     shadow_walk_next(&it)) {
@@ -820,6 +823,11 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 	 */
 	mmu_topup_memory_caches(vcpu);
 
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa)) {
+		WARN_ON(1);
+		return;
+	}
+
 	spin_lock(&vcpu->kvm->mmu_lock);
 	for_each_shadow_entry(vcpu, gva, iterator) {
 		level = iterator.level;
...
@@ -1671,6 +1671,19 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
 	mark_dirty(svm->vmcb, VMCB_ASID);
 }
 
+static u64 svm_get_dr6(struct kvm_vcpu *vcpu)
+{
+	return to_svm(vcpu)->vmcb->save.dr6;
+}
+
+static void svm_set_dr6(struct kvm_vcpu *vcpu, unsigned long value)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	svm->vmcb->save.dr6 = value;
+	mark_dirty(svm->vmcb, VMCB_DR);
+}
+
 static void svm_set_dr7(struct kvm_vcpu *vcpu, unsigned long value)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -4286,6 +4299,8 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.set_idt = svm_set_idt,
 	.get_gdt = svm_get_gdt,
 	.set_gdt = svm_set_gdt,
+	.get_dr6 = svm_get_dr6,
+	.set_dr6 = svm_set_dr6,
 	.set_dr7 = svm_set_dr7,
 	.cache_reg = svm_cache_reg,
 	.get_rflags = svm_get_rflags,
...
This diff is collapsed.
@@ -94,6 +94,9 @@ EXPORT_SYMBOL_GPL(kvm_x86_ops);
 static bool ignore_msrs = 0;
 module_param(ignore_msrs, bool, S_IRUGO | S_IWUSR);
 
+unsigned int min_timer_period_us = 500;
+module_param(min_timer_period_us, uint, S_IRUGO | S_IWUSR);
+
 bool kvm_has_tsc_control;
 EXPORT_SYMBOL_GPL(kvm_has_tsc_control);
 u32  kvm_max_guest_tsc_khz;
@@ -719,6 +722,12 @@ unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_get_cr8);
 
+static void kvm_update_dr6(struct kvm_vcpu *vcpu)
+{
+	if (!(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP))
+		kvm_x86_ops->set_dr6(vcpu, vcpu->arch.dr6);
+}
+
 static void kvm_update_dr7(struct kvm_vcpu *vcpu)
 {
 	unsigned long dr7;
@@ -747,6 +756,7 @@ static int __kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val)
 		if (val & 0xffffffff00000000ULL)
 			return -1; /* #GP */
 		vcpu->arch.dr6 = (val & DR6_VOLATILE) | DR6_FIXED_1;
+		kvm_update_dr6(vcpu);
 		break;
 	case 5:
 		if (kvm_read_cr4_bits(vcpu, X86_CR4_DE))
@@ -788,7 +798,10 @@ static int _kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val)
 			return 1;
 		/* fall through */
 	case 6:
-		*val = vcpu->arch.dr6;
+		if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP)
+			*val = vcpu->arch.dr6;
+		else
+			*val = kvm_x86_ops->get_dr6(vcpu);
 		break;
 	case 5:
 		if (kvm_read_cr4_bits(vcpu, X86_CR4_DE))
@@ -836,11 +849,12 @@ EXPORT_SYMBOL_GPL(kvm_rdpmc);
  * kvm-specific. Those are put in the beginning of the list.
  */
-#define KVM_SAVE_MSRS_BEGIN	10
+#define KVM_SAVE_MSRS_BEGIN	12
 static u32 msrs_to_save[] = {
 	MSR_KVM_SYSTEM_TIME, MSR_KVM_WALL_CLOCK,
 	MSR_KVM_SYSTEM_TIME_NEW, MSR_KVM_WALL_CLOCK_NEW,
 	HV_X64_MSR_GUEST_OS_ID, HV_X64_MSR_HYPERCALL,
+	HV_X64_MSR_TIME_REF_COUNT, HV_X64_MSR_REFERENCE_TSC,
 	HV_X64_MSR_APIC_ASSIST_PAGE, MSR_KVM_ASYNC_PF_EN, MSR_KVM_STEAL_TIME,
 	MSR_KVM_PV_EOI_EN,
 	MSR_IA32_SYSENTER_CS, MSR_IA32_SYSENTER_ESP, MSR_IA32_SYSENTER_EIP,
@@ -1275,8 +1289,6 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	kvm->arch.last_tsc_write = data;
 	kvm->arch.last_tsc_khz = vcpu->arch.virtual_tsc_khz;
 
-	/* Reset of TSC must disable overshoot protection below */
-	vcpu->arch.hv_clock.tsc_timestamp = 0;
 	vcpu->arch.last_guest_tsc = data;
 
 	/* Keep track of which generation this VCPU has synchronized to */
@@ -1484,7 +1496,7 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
 	unsigned long flags, this_tsc_khz;
 	struct kvm_vcpu_arch *vcpu = &v->arch;
 	struct kvm_arch *ka = &v->kvm->arch;
-	s64 kernel_ns, max_kernel_ns;
+	s64 kernel_ns;
 	u64 tsc_timestamp, host_tsc;
 	struct pvclock_vcpu_time_info guest_hv_clock;
 	u8 pvclock_flags;
@@ -1543,37 +1555,6 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
 	if (!vcpu->pv_time_enabled)
 		return 0;
 
-	/*
-	 * Time as measured by the TSC may go backwards when resetting the base
-	 * tsc_timestamp.  The reason for this is that the TSC resolution is
-	 * higher than the resolution of the other clock scales.  Thus, many
-	 * possible measurments of the TSC correspond to one measurement of any
-	 * other clock, and so a spread of values is possible.  This is not a
-	 * problem for the computation of the nanosecond clock; with TSC rates
-	 * around 1GHZ, there can only be a few cycles which correspond to one
-	 * nanosecond value, and any path through this code will inevitably
-	 * take longer than that.  However, with the kernel_ns value itself,
-	 * the precision may be much lower, down to HZ granularity.  If the
-	 * first sampling of TSC against kernel_ns ends in the low part of the
-	 * range, and the second in the high end of the range, we can get:
-	 *
-	 * (TSC - offset_low) * S + kns_old > (TSC - offset_high) * S + kns_new
-	 *
-	 * As the sampling errors potentially range in the thousands of cycles,
-	 * it is possible such a time value has already been observed by the
-	 * guest.  To protect against this, we must compute the system time as
-	 * observed by the guest and ensure the new system time is greater.
-	 */
-	max_kernel_ns = 0;
-	if (vcpu->hv_clock.tsc_timestamp) {
-		max_kernel_ns = vcpu->last_guest_tsc -
-				vcpu->hv_clock.tsc_timestamp;
-		max_kernel_ns = pvclock_scale_delta(max_kernel_ns,
-				    vcpu->hv_clock.tsc_to_system_mul,
-				    vcpu->hv_clock.tsc_shift);
-		max_kernel_ns += vcpu->last_kernel_ns;
-	}
-
 	if (unlikely(vcpu->hw_tsc_khz != this_tsc_khz)) {
 		kvm_get_time_scale(NSEC_PER_SEC / 1000, this_tsc_khz,
 				   &vcpu->hv_clock.tsc_shift,
@@ -1581,14 +1562,6 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
 		vcpu->hw_tsc_khz = this_tsc_khz;
 	}
 
-	/* with a master <monotonic time, tsc value> tuple,
-	 * pvclock clock reads always increase at the (scaled) rate
-	 * of guest TSC - no need to deal with sampling errors.
-	 */
-	if (!use_master_clock) {
-		if (max_kernel_ns > kernel_ns)
-			kernel_ns = max_kernel_ns;
-	}
-
 	/* With all the info we got, fill in the values */
 	vcpu->hv_clock.tsc_timestamp = tsc_timestamp;
 	vcpu->hv_clock.system_time = kernel_ns + v->kvm->arch.kvmclock_offset;
@@ -1826,6 +1799,8 @@ static bool kvm_hv_msr_partition_wide(u32 msr)
 	switch (msr) {
 	case HV_X64_MSR_GUEST_OS_ID:
 	case HV_X64_MSR_HYPERCALL:
+	case HV_X64_MSR_REFERENCE_TSC:
+	case HV_X64_MSR_TIME_REF_COUNT:
 		r = true;
 		break;
 	}
@@ -1867,6 +1842,20 @@ static int set_msr_hyperv_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 		kvm->arch.hv_hypercall = data;
 		break;
 	}
+	case HV_X64_MSR_REFERENCE_TSC: {
+		u64 gfn;
+		HV_REFERENCE_TSC_PAGE tsc_ref;
+
+		memset(&tsc_ref, 0, sizeof(tsc_ref));
+		kvm->arch.hv_tsc_page = data;
+		if (!(data & HV_X64_MSR_TSC_REFERENCE_ENABLE))
+			break;
+		gfn = data >> HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT;
+		if (kvm_write_guest(kvm, data,
+			&tsc_ref, sizeof(tsc_ref)))
+			return 1;
+		mark_page_dirty(kvm, gfn);
+		break;
+	}
 	default:
 		vcpu_unimpl(vcpu, "HYPER-V unimplemented wrmsr: 0x%x "
 			    "data 0x%llx\n", msr, data);
@@ -2291,6 +2280,14 @@ static int get_msr_hyperv_pw(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
 	case HV_X64_MSR_HYPERCALL:
 		data = kvm->arch.hv_hypercall;
 		break;
+	case HV_X64_MSR_TIME_REF_COUNT: {
+		data =
+		     div_u64(get_kernel_ns() + kvm->arch.kvmclock_offset, 100);
+		break;
+	}
+	case HV_X64_MSR_REFERENCE_TSC:
+		data = kvm->arch.hv_tsc_page;
+		break;
 	default:
 		vcpu_unimpl(vcpu, "Hyper-V unhandled rdmsr: 0x%x\n", msr);
 		return 1;
@@ -2604,6 +2601,7 @@ int kvm_dev_ioctl_check_extension(long ext)
 #ifdef CONFIG_KVM_DEVICE_ASSIGNMENT
 	case KVM_CAP_ASSIGN_DEV_IRQ:
 	case KVM_CAP_PCI_2_3:
+	case KVM_CAP_HYPERV_TIME:
 #endif
 		r = 1;
 		break;
@@ -2972,8 +2970,11 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 static void kvm_vcpu_ioctl_x86_get_debugregs(struct kvm_vcpu *vcpu,
 					     struct kvm_debugregs *dbgregs)
 {
+	unsigned long val;
+
 	memcpy(dbgregs->db, vcpu->arch.db, sizeof(vcpu->arch.db));
-	dbgregs->dr6 = vcpu->arch.dr6;
+	_kvm_get_dr(vcpu, 6, &val);
+	dbgregs->dr6 = val;
 	dbgregs->dr7 = vcpu->arch.dr7;
 	dbgregs->flags = 0;
 	memset(&dbgregs->reserved, 0, sizeof(dbgregs->reserved));
@@ -2987,7 +2988,9 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
 
 	memcpy(vcpu->arch.db, dbgregs->db, sizeof(vcpu->arch.db));
 	vcpu->arch.dr6 = dbgregs->dr6;
+	kvm_update_dr6(vcpu);
 	vcpu->arch.dr7 = dbgregs->dr7;
+	kvm_update_dr7(vcpu);
 
 	return 0;
 }
@@ -5834,6 +5837,11 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 	kvm_apic_update_tmr(vcpu, tmr);
 }
 
+/*
+ * Returns 1 to let __vcpu_run() continue the guest execution loop without
+ * exiting to the userspace. Otherwise, the value will be returned to the
+ * userspace.
+ */
 static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 {
 	int r;
@@ -6089,7 +6097,7 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
 		}
 		if (need_resched()) {
 			srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
-			kvm_resched(vcpu);
+			cond_resched();
 			vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
 		}
 	}
@@ -6717,6 +6725,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu)
 
 	memset(vcpu->arch.db, 0, sizeof(vcpu->arch.db));
 	vcpu->arch.dr6 = DR6_FIXED_1;
+	kvm_update_dr6(vcpu);
 	vcpu->arch.dr7 = DR7_FIXED_1;
 	kvm_update_dr7(vcpu);
...
@@ -125,5 +125,7 @@ int kvm_write_guest_virt_system(struct x86_emulate_ctxt *ctxt,
 #define KVM_SUPPORTED_XCR0	(XSTATE_FP | XSTATE_SSE | XSTATE_YMM)
 extern u64 host_xcr0;
 
+extern unsigned int min_timer_period_us;
+
 extern struct static_key kvm_no_apic_vcpu;
 
 #endif
@@ -144,7 +144,7 @@ struct kvm_run;
 struct kvm_exit_mmio;
 
 #ifdef CONFIG_KVM_ARM_VGIC
-int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr);
+int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write);
 int kvm_vgic_hyp_init(void);
 int kvm_vgic_init(struct kvm *kvm);
 int kvm_vgic_create(struct kvm *kvm);
...
@@ -17,6 +17,9 @@
 #define GIC_CPU_EOI			0x10
 #define GIC_CPU_RUNNINGPRI		0x14
 #define GIC_CPU_HIGHPRI			0x18
+#define GIC_CPU_ALIAS_BINPOINT		0x1c
+#define GIC_CPU_ACTIVEPRIO		0xd0
+#define GIC_CPU_IDENT			0xfc
 
 #define GIC_DIST_CTRL			0x000
 #define GIC_DIST_CTR			0x004
@@ -56,6 +59,15 @@
 #define GICH_LR_ACTIVE_BIT		(1 << 29)
 #define GICH_LR_EOI			(1 << 19)
 
+#define GICH_VMCR_CTRL_SHIFT		0
+#define GICH_VMCR_CTRL_MASK		(0x21f << GICH_VMCR_CTRL_SHIFT)
+#define GICH_VMCR_PRIMASK_SHIFT		27
+#define GICH_VMCR_PRIMASK_MASK		(0x1f << GICH_VMCR_PRIMASK_SHIFT)
+#define GICH_VMCR_BINPOINT_SHIFT	21
+#define GICH_VMCR_BINPOINT_MASK		(0x7 << GICH_VMCR_BINPOINT_SHIFT)
+#define GICH_VMCR_ALIAS_BINPOINT_SHIFT	18
+#define GICH_VMCR_ALIAS_BINPOINT_MASK	(0x7 << GICH_VMCR_ALIAS_BINPOINT_SHIFT)
+
 #define GICH_MISR_EOI			(1 << 0)
 #define GICH_MISR_U			(1 << 1)
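
These masks let the VGIC get/set code shuttle fields between the architectural CPU-interface view and the GICH_VMCR shadow register. A sketch of the packing for the 5-bit priority mask, using only the constants added above:

    static inline u32 gich_vmcr_set_primask(u32 vmcr, u32 pri)
    {
        vmcr &= ~GICH_VMCR_PRIMASK_MASK;
        return vmcr | ((pri << GICH_VMCR_PRIMASK_SHIFT) &
                       GICH_VMCR_PRIMASK_MASK);
    }

    static inline u32 gich_vmcr_get_primask(u32 vmcr)
    {
        return (vmcr & GICH_VMCR_PRIMASK_MASK) >> GICH_VMCR_PRIMASK_SHIFT;
    }
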
...
@@ -172,8 +172,6 @@ int kvm_io_bus_write_cookie(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
 			    int len, const void *val, long cookie);
 int kvm_io_bus_read(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr, int len,
 		    void *val);
-int kvm_io_bus_read_cookie(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
-			   int len, void *val, long cookie);
 int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
 			    int len, struct kvm_io_device *dev);
 int kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
@@ -463,8 +461,6 @@ void kvm_exit(void);
 void kvm_get_kvm(struct kvm *kvm);
 void kvm_put_kvm(struct kvm *kvm);
 
-void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new,
-		     u64 last_generation);
-
 static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
 {
@@ -537,7 +533,6 @@ unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable);
 unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
-void kvm_set_page_dirty(struct page *page);
 void kvm_set_page_accessed(struct page *page);
 pfn_t gfn_to_pfn_atomic(struct kvm *kvm, gfn_t gfn);
@@ -549,7 +544,6 @@ pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
 pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn);
 
-void kvm_release_pfn_dirty(pfn_t pfn);
 void kvm_release_pfn_clean(pfn_t pfn);
 void kvm_set_pfn_dirty(pfn_t pfn);
 void kvm_set_pfn_accessed(pfn_t pfn);
@@ -576,14 +570,11 @@ struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
 int kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn);
 unsigned long kvm_host_page_size(struct kvm *kvm, gfn_t gfn);
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
-void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot,
-			     gfn_t gfn);
 
 void kvm_vcpu_block(struct kvm_vcpu *vcpu);
 void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_yield_to(struct kvm_vcpu *target);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu);
-void kvm_resched(struct kvm_vcpu *vcpu);
 void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
 void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
@@ -605,8 +596,6 @@ int kvm_get_dirty_log(struct kvm *kvm,
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 				struct kvm_dirty_log *log);
 
-int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
-				   struct kvm_userspace_memory_region *mem);
 int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 			  bool line_status);
 long kvm_arch_vm_ioctl(struct file *filp,
@@ -654,8 +643,6 @@ void kvm_arch_check_processor_compat(void *rtn);
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
 
-void kvm_free_physmem(struct kvm *kvm);
-
 void *kvm_kvzalloc(unsigned long size);
 void kvm_kvfree(const void *addr);
@@ -1076,6 +1063,7 @@ struct kvm_device *kvm_device_from_filp(struct file *filp);
 extern struct kvm_device_ops kvm_mpic_ops;
 extern struct kvm_device_ops kvm_xics_ops;
 extern struct kvm_device_ops kvm_vfio_ops;
+extern struct kvm_device_ops kvm_arm_vgic_v2_ops;
 
 #ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
@@ -1097,12 +1085,6 @@ static inline void kvm_vcpu_set_in_spin_loop(struct kvm_vcpu *vcpu, bool val)
 static inline void kvm_vcpu_set_dy_eligible(struct kvm_vcpu *vcpu, bool val)
 {
 }
-
-static inline bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
-{
-	return true;
-}
-
 #endif /* CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT */
 
 #endif
@@ -674,6 +674,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_ARM_EL1_32BIT 93
 #define KVM_CAP_SPAPR_MULTITCE 94
 #define KVM_CAP_EXT_EMUL_CPUID 95
+#define KVM_CAP_HYPERV_TIME 96
 
 #ifdef KVM_CAP_IRQ_ROUTING
@@ -853,6 +854,7 @@ struct kvm_device_attr {
 #define KVM_DEV_VFIO_GROUP			1
 #define  KVM_DEV_VFIO_GROUP_ADD			1
 #define  KVM_DEV_VFIO_GROUP_DEL			2
+#define KVM_DEV_TYPE_ARM_VGIC_V2		5
 
 /*
  * ioctls for VM fds
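
With the new device type, userspace instantiates the in-kernel vGIC through the existing KVM_CREATE_DEVICE ioctl; a minimal sketch (error handling elided):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    int create_vgic_v2(int vm_fd)
    {
        struct kvm_create_device dev = {
            .type  = KVM_DEV_TYPE_ARM_VGIC_V2,
            .flags = 0,
        };

        if (ioctl(vm_fd, KVM_CREATE_DEVICE, &dev) < 0)
            return -1;
        return dev.fd;      /* use with KVM_{GET,SET}_DEVICE_ATTR */
    }
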
...
@@ -182,6 +182,40 @@ static void kvm_timer_init_interrupt(void *info)
 	enable_percpu_irq(host_vtimer_irq, 0);
 }
 
+int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
+{
+	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+
+	switch (regid) {
+	case KVM_REG_ARM_TIMER_CTL:
+		timer->cntv_ctl = value;
+		break;
+	case KVM_REG_ARM_TIMER_CNT:
+		vcpu->kvm->arch.timer.cntvoff = kvm_phys_timer_read() - value;
+		break;
+	case KVM_REG_ARM_TIMER_CVAL:
+		timer->cntv_cval = value;
+		break;
+	default:
+		return -1;
+	}
+	return 0;
+}
+
+u64 kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, u64 regid)
+{
+	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+
+	switch (regid) {
+	case KVM_REG_ARM_TIMER_CTL:
+		return timer->cntv_ctl;
+	case KVM_REG_ARM_TIMER_CNT:
+		return kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
+	case KVM_REG_ARM_TIMER_CVAL:
+		return timer->cntv_cval;
+	}
+	return (u64)-1;
+}
+
 static int kvm_timer_cpu_notify(struct notifier_block *self,
 				unsigned long action, void *cpu)
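
These accessors back the ONE_REG interface that makes the virtual counter migratable: setting KVM_REG_ARM_TIMER_CNT recomputes cntvoff against the physical counter, so reads on the destination resume from the saved value. Userspace usage, sketched (register ID taken from the ARM KVM UAPI headers):

    u64 cnt;
    struct kvm_one_reg reg = {
        .id   = KVM_REG_ARM_TIMER_CNT,
        .addr = (__u64)(unsigned long)&cnt,
    };

    ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);   /* save on the source */
    /* ... transfer ... */
    ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);   /* restore on the destination */
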
...
This diff is collapsed.
@@ -520,7 +520,7 @@ static int ioapic_mmio_write(struct kvm_io_device *this, gpa_t addr, int len,
 	return 0;
 }
 
-void kvm_ioapic_reset(struct kvm_ioapic *ioapic)
+static void kvm_ioapic_reset(struct kvm_ioapic *ioapic)
 {
 	int i;
...
@@ -91,7 +91,6 @@ void kvm_ioapic_destroy(struct kvm *kvm);
 int kvm_ioapic_set_irq(struct kvm_ioapic *ioapic, int irq, int irq_source_id,
 		       int level, bool line_status);
 void kvm_ioapic_clear_all(struct kvm_ioapic *ioapic, int irq_source_id);
-void kvm_ioapic_reset(struct kvm_ioapic *ioapic);
 int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
 		struct kvm_lapic_irq *irq, unsigned long *dest_map);
 int kvm_get_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state);
...
@@ -95,6 +95,12 @@ static int hardware_enable_all(void);
 static void hardware_disable_all(void);
 
 static void kvm_io_bus_destroy(struct kvm_io_bus *bus);
+static void update_memslots(struct kvm_memslots *slots,
+			    struct kvm_memory_slot *new, u64 last_generation);
+
+static void kvm_release_pfn_dirty(pfn_t pfn);
+static void mark_page_dirty_in_slot(struct kvm *kvm,
+				    struct kvm_memory_slot *memslot, gfn_t gfn);
 
 bool kvm_rebooting;
 EXPORT_SYMBOL_GPL(kvm_rebooting);
@@ -553,7 +559,7 @@ static void kvm_free_physmem_slot(struct kvm *kvm, struct kvm_memory_slot *free,
 	free->npages = 0;
 }
 
-void kvm_free_physmem(struct kvm *kvm)
+static void kvm_free_physmem(struct kvm *kvm)
 {
 	struct kvm_memslots *slots = kvm->memslots;
 	struct kvm_memory_slot *memslot;
@@ -675,8 +681,9 @@ static void sort_memslots(struct kvm_memslots *slots)
 		slots->id_to_index[slots->memslots[i].id] = i;
 }
 
-void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new,
-		     u64 last_generation)
+static void update_memslots(struct kvm_memslots *slots,
+			    struct kvm_memory_slot *new,
+			    u64 last_generation)
 {
 	if (new) {
 		int id = new->id;
@@ -924,8 +931,8 @@ int kvm_set_memory_region(struct kvm *kvm,
 }
 EXPORT_SYMBOL_GPL(kvm_set_memory_region);
 
-int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
-				   struct kvm_userspace_memory_region *mem)
+static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
+					  struct kvm_userspace_memory_region *mem)
 {
 	if (mem->slot >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
@@ -1047,7 +1054,7 @@ static unsigned long gfn_to_hva_many(struct kvm_memory_slot *slot, gfn_t gfn,
 }
 
 unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot,
				 gfn_t gfn)
 {
 	return gfn_to_hva_many(slot, gfn, NULL);
 }
@@ -1387,18 +1394,11 @@ void kvm_release_page_dirty(struct page *page)
 }
 EXPORT_SYMBOL_GPL(kvm_release_page_dirty);
 
-void kvm_release_pfn_dirty(pfn_t pfn)
+static void kvm_release_pfn_dirty(pfn_t pfn)
 {
 	kvm_set_pfn_dirty(pfn);
 	kvm_release_pfn_clean(pfn);
 }
-EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
-
-void kvm_set_page_dirty(struct page *page)
-{
-	kvm_set_pfn_dirty(page_to_pfn(page));
-}
-EXPORT_SYMBOL_GPL(kvm_set_page_dirty);
 
 void kvm_set_pfn_dirty(pfn_t pfn)
 {
@@ -1640,8 +1640,9 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 }
 EXPORT_SYMBOL_GPL(kvm_clear_guest);
 
-void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot,
-			     gfn_t gfn)
+static void mark_page_dirty_in_slot(struct kvm *kvm,
+				    struct kvm_memory_slot *memslot,
+				    gfn_t gfn)
 {
 	if (memslot && memslot->dirty_bitmap) {
 		unsigned long rel_gfn = gfn - memslot->base_gfn;
@@ -1710,14 +1711,6 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
 #endif /* !CONFIG_S390 */
 
-void kvm_resched(struct kvm_vcpu *vcpu)
-{
-	if (!need_resched())
-		return;
-	cond_resched();
-}
-EXPORT_SYMBOL_GPL(kvm_resched);
-
 bool kvm_vcpu_yield_to(struct kvm_vcpu *target)
 {
 	struct pid *pid;
@@ -1742,7 +1735,6 @@ bool kvm_vcpu_yield_to(struct kvm_vcpu *target)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_yield_to);
 
-#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
 /*
  * Helper that checks whether a VCPU is eligible for directed yield.
  * Most eligible candidate to yield is decided by following heuristics:
@@ -1765,8 +1757,9 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_yield_to);
  * locking does not harm. It may result in trying to yield to same VCPU, fail
  * and continue with next VCPU and so on.
  */
-bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
+static bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
 {
+#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
 	bool eligible;
 
 	eligible = !vcpu->spin_loop.in_spin_loop ||
@@ -1777,8 +1770,10 @@ bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
 	kvm_vcpu_set_dy_eligible(vcpu, !vcpu->spin_loop.dy_eligible);
 
 	return eligible;
-}
+#else
+	return true;
 #endif
+}
 
 void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 {
@@ -2283,6 +2278,11 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
 	case KVM_DEV_TYPE_VFIO:
 		ops = &kvm_vfio_ops;
 		break;
+#endif
+#ifdef CONFIG_KVM_ARM_VGIC
+	case KVM_DEV_TYPE_ARM_VGIC_V2:
+		ops = &kvm_arm_vgic_v2_ops;
+		break;
 #endif
 	default:
 		return -ENODEV;
@@ -2939,33 +2939,6 @@ int kvm_io_bus_read(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
 	return r < 0 ? r : 0;
 }
 
-/* kvm_io_bus_read_cookie - called under kvm->slots_lock */
-int kvm_io_bus_read_cookie(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
-			   int len, void *val, long cookie)
-{
-	struct kvm_io_bus *bus;
-	struct kvm_io_range range;
-
-	range = (struct kvm_io_range) {
-		.addr = addr,
-		.len = len,
-	};
-
-	bus = srcu_dereference(kvm->buses[bus_idx], &kvm->srcu);
-
-	/* First try the device referenced by cookie. */
-	if ((cookie >= 0) && (cookie < bus->dev_count) &&
-	    (kvm_io_bus_cmp(&range, &bus->range[cookie]) == 0))
-		if (!kvm_iodevice_read(bus->range[cookie].dev, addr, len,
-				       val))
-			return cookie;
-
-	/*
-	 * cookie contained garbage; fall back to search and return the
-	 * correct cookie value.
-	 */
-	return __kvm_io_bus_read(bus, &range, val);
-}
-
 /* Caller must hold slots_lock. */
 int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
...
@@ -101,14 +101,14 @@ static int kvm_vfio_set_group(struct kvm_device *dev, long attr, u64 arg)
 	struct kvm_vfio *kv = dev->private;
 	struct vfio_group *vfio_group;
 	struct kvm_vfio_group *kvg;
-	void __user *argp = (void __user *)arg;
+	int32_t __user *argp = (int32_t __user *)(unsigned long)arg;
 	struct fd f;
 	int32_t fd;
 	int ret;
 
 	switch (attr) {
 	case KVM_DEV_VFIO_GROUP_ADD:
-		if (get_user(fd, (int32_t __user *)argp))
+		if (get_user(fd, argp))
 			return -EFAULT;
 
 		f = fdget(fd);
@@ -148,7 +148,7 @@ static int kvm_vfio_set_group(struct kvm_device *dev, long attr, u64 arg)
 		return 0;
 
 	case KVM_DEV_VFIO_GROUP_DEL:
-		if (get_user(fd, (int32_t __user *)argp))
+		if (get_user(fd, argp))
 			return -EFAULT;
 
 		f = fdget(fd);
...