- 12 Apr, 2017 28 commits
-
-
Ladi Prosek authored
Hyper-V writes 0x800000000000 to MSR_AMD64_DC_CFG when running on AMD CPUs as recommended in erratum 383, analogous to our svm_init_erratum_383. By ignoring the MSR, this patch enables running Hyper-V in L1 on AMD. Signed-off-by: Ladi Prosek <lprosek@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
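A hedged illustration of the idea (not the actual patch): an MSR-write handler that silently accepts the write instead of injecting #GP. The MSR_AMD64_DC_CFG value is the real one; demo_svm_set_msr() is a simplified stand-in for svm_set_msr().
```c
/* Simplified stand-in for an SVM MSR-write handler; only the
 * MSR_AMD64_DC_CFG value is real, the rest is illustrative. */
#define MSR_AMD64_DC_CFG 0xc0011022

static int demo_svm_set_msr(unsigned int msr, unsigned long long data)
{
	switch (msr) {
	case MSR_AMD64_DC_CFG:
		/* Hyper-V applies the erratum 383 workaround itself on AMD
		 * hosts; KVM already did the equivalent in
		 * svm_init_erratum_383(), so the written value is ignored. */
		(void)data;
		return 0;
	default:
		return -1;	/* unknown MSR: would normally inject #GP */
	}
}
```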
-
Wanpeng Li authored
The virt_xxx memory barriers are implemented trivially using the low-level __smp_xxx macros. On a strongly ordered TSO memory model, __smp_xxx reduces to a compiler barrier, whereas the mandatory barriers unconditionally emit memory-barrier instructions. This patch therefore replaces the rmb() in kvm_steal_clock() with virt_rmb(). Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
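For illustration, a simplified sketch of the version/payload/re-check read pattern the steal-time accessor uses, with virt_rmb() ordering the loads; the struct layout here is a stripped-down stand-in for the real steal-time record.
```c
/* Stripped-down sketch of a steal-time read.  virt_rmb() is sufficient
 * because the writer is the hypervisor rather than another physical CPU,
 * so on TSO hosts it compiles down to a compiler barrier. */
struct demo_steal_time {
	unsigned long long steal;
	unsigned int version;		/* odd while an update is in flight */
};

static unsigned long long demo_steal_clock(volatile struct demo_steal_time *src)
{
	unsigned long long steal;
	unsigned int version;

	do {
		version = src->version;
		virt_rmb();		/* read version before the payload */
		steal = src->steal;
		virt_rmb();		/* read the payload before re-checking */
	} while ((version & 1) || version != src->version);

	return steal;
}
```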
-
Denis Plotnikov authored
VCPU TSC synchronization is performed in kvm_write_tsc() when the TSC value being set is within 1 second of the expected value, as obtained by extrapolating the TSC of the already synchronized VCPUs. This is naturally achieved on all VCPUs at VM start and resume; however, on VCPU hotplug it is not: the newly added VCPU is created with TSC == 0 while the others are well ahead. To compensate for that, consider a host-initiated kvm_write_tsc() with TSC == 0 a special case requiring synchronization regardless of the current TSC on other VCPUs. Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com> Reviewed-by: Roman Kagan <rkagan@virtuozzo.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
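A hedged sketch of the resulting decision: a host-initiated write of 0 always synchronizes, otherwise the existing one-second window applies. The function and parameter names are illustrative; tsc_exp stands for the extrapolated TSC and tsc_hz for one second's worth of cycles.
```c
/* Illustrative only: decide whether a TSC write should be synchronized
 * with the other VCPUs rather than starting a new timebase. */
static int demo_should_sync_tsc(int host_initiated, unsigned long long data,
				unsigned long long tsc_exp,
				unsigned long long tsc_hz)
{
	/* Hotplugged VCPUs arrive with TSC == 0; a host-initiated write of 0
	 * must therefore synchronize regardless of how far ahead the other
	 * VCPUs already are. */
	if (host_initiated && data == 0)
		return 1;

	/* Existing rule: synchronize when the written value lies within one
	 * second (tsc_hz cycles) of the extrapolated TSC. */
	return data < tsc_exp + tsc_hz && data + tsc_hz > tsc_exp;
}
```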
-
Denis Plotnikov authored
Reuse existing code instead of using inline asm. Make the code more concise and clear in the TSC synchronization part. Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com> Reviewed-by: Roman Kagan <rkagan@virtuozzo.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Although the current check is not wrong, this check explicitly includes the pic. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Now it looks almost like picdev_write(). Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
We already have the exact same checks a couple of lines below. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Not used outside of i8259.c, so let's make it static. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
We can easily compact this code and get rid of one local variable. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Let's drop the goto and return the error value directly. Suggested-by: Peter Xu <peterx@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
No need for the goto label + local variable "r". Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Let's rename it into a proper arch specific callback. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
We know there is an ioapic, so let's call it directly. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
kvm_ioapic_init() is guaranteed to be called without any created VCPUs, so doing an all-vcpu request results in a NOP. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Currently, one could set pins 8-15, implicitly referring to KVM_IRQCHIP_PIC_SLAVE. Get rid of the two local variables max_pin and delta on the way. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Let's just move it to the place where it is actually needed. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
There no longer seems to be any reason for this lock. It appears to have protected removal of kvm->arch.vpic / kvm->arch.vioapic while they were only partially initialized; now that access is properly protected by kvm->arch.irqchip_mode, the lock should not be necessary anymore. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
When handling KVM_GET_IRQCHIP, we already check irqchip_kernel(), which implies a fully initialized ioapic. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Let's just use kvm->arch.vioapic directly. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
It seemed like a nice idea to encapsulate access to kvm->arch.vpic. But as the usage is already mixed (internal locks are taken outside of i8259.c) and grepping for "vpic" alone is much easier, let's just get rid of pic_irqchip(). Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
KVM_IRQCHIP_KERNEL implies a fully initialized ioapic, while kvm->arch.vioapic might temporarily be set but invalidated again if e.g. setting up the default routing fails during KVM_CREATE_IRQCHIP. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Let's avoid checking against kvm->arch.vpic. We have kvm->arch.irqchip_mode for that now. KVM_IRQCHIP_KERNEL implies a fully initialized pic, while kvm->arch.vpic might temporarily be set but invalidated again if e.g. kvm_ioapic_init() fails during KVM_CREATE_IRQCHIP. Although current users seem to be fine, this avoids future bugs. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Let's replace the checks for pic_in_kernel() and ioapic_in_kernel() with checks against irqchip_mode. Also make sure that creation of any route is only possible if we have an lapic in kernel (irqchip_in_kernel()) or if we are currently initializing the irqchip. This is necessary to switch pic_in_kernel() and ioapic_in_kernel() over to irqchip_mode, too. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
David Hildenbrand authored
Let's add a new mode and set it while we create the irqchip via KVM_CREATE_IRQCHIP and KVM_CAP_SPLIT_IRQCHIP. This mode will be used later to test if adding routes (in kvm_set_routing_entry()) is already allowed. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
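A sketch of what the tracked mode and the route-setup gate from the previous patch could look like; the enumerator and helper names here are assumptions, not the actual identifiers.
```c
/* Hypothetical names; only the idea of a per-VM irqchip mode is taken from
 * the patches above. */
enum demo_irqchip_mode {
	DEMO_IRQCHIP_NONE,		/* no in-kernel irqchip */
	DEMO_IRQCHIP_INIT_IN_PROGRESS,	/* KVM_CREATE_IRQCHIP or
					 * KVM_CAP_SPLIT_IRQCHIP running */
	DEMO_IRQCHIP_KERNEL,		/* pic + ioapic + lapic in kernel */
	DEMO_IRQCHIP_SPLIT,		/* lapic in kernel, pic/ioapic in userspace */
};

/* Routes may only be created once an lapic exists in the kernel, or while
 * the irqchip is being created (the default routing is installed then). */
static int demo_can_set_irq_routing(enum demo_irqchip_mode mode)
{
	return mode != DEMO_IRQCHIP_NONE;
}
```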
-
David Hildenbrand authored
Avoid races between KVM_SET_GSI_ROUTING and KVM_CREATE_IRQCHIP by taking the kvm->lock when setting up routes. If KVM_CREATE_IRQCHIP fails, KVM_SET_GSI_ROUTING could already have set up routes pointing at the pic/ioapic, which are then silently removed. Also, as a side effect, this patch makes sure that KVM_SET_GSI_ROUTING and KVM_CAP_SPLIT_IRQCHIP cannot run in parallel. Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
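A hedged sketch of the locking shape this gives the routing ioctl; the types and the routing helper are hypothetical stand-ins.
```c
#include <linux/mutex.h>

struct demo_kvm {
	struct mutex lock;	/* same lock KVM_CREATE_IRQCHIP holds */
};

/* Hypothetical helper that swaps in the new routing table. */
int demo_set_irq_routing(struct demo_kvm *kvm, const void *entries, int nr);

static int demo_set_gsi_routing(struct demo_kvm *kvm,
				const void *entries, int nr)
{
	int r;

	/* Serializing on kvm->lock means routes can no longer be installed
	 * against a pic/ioapic that a failing KVM_CREATE_IRQCHIP is about
	 * to tear down, and KVM_CAP_SPLIT_IRQCHIP cannot run in parallel. */
	mutex_lock(&kvm->lock);
	r = demo_set_irq_routing(kvm, entries, nr);
	mutex_unlock(&kvm->lock);

	return r;
}
```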
-
- 11 Apr, 2017 1 commit
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linuxRadim Krčmář authored
From: Christian Borntraeger <borntraeger@de.ibm.com>
KVM: s390: features for 4.12
1. Guarded storage support for guests. This contains an s390 base Linux feature branch that is necessary to implement the KVM part.
2. Provide an interface to implement adapter interruption suppression, which is necessary for proper zPCI support.
3. Use more defines instead of numbers.
4. Provide logging for lazy enablement of runtime instrumentation.
-
- 07 Apr, 2017 11 commits
-
-
Jim Mattson authored
The userspace exception injection API and code path are entirely unprepared for exceptions that might cause a VM-exit from L2 to L1, so the best course of action may be to simply disallow this for now.
1. The API provides no mechanism for userspace to specify the new DR6 bits for a #DB exception or the new CR2 value for a #PF exception. Presumably, userspace is expected to modify these registers directly with KVM_SET_SREGS before the next KVM_RUN ioctl. However, in the event that L1 intercepts the exception, these registers should not be changed. Instead, the new values should be provided in the exit_qualification field of vmcs12 (Intel SDM vol 3, section 27.1).
2. In the case of a userspace-injected #DB, inject_pending_event() clears DR7.GD before calling vmx_queue_exception(). However, in the event that L1 intercepts the exception, this is too early, because DR7.GD should not be modified by a #DB that causes a VM-exit directly (Intel SDM vol 3, section 27.1).
3. If the injected exception is a #PF, nested_vmx_check_exception() doesn't properly check whether or not L1 is interested in the associated error code (using the #PF error code mask and match fields from vmcs12). It may either return 0 when it should call nested_vmx_vmexit(), or vice versa.
4. nested_vmx_check_exception() assumes that it is dealing with a hardware-generated exception intercept from L2, with some of the relevant details (the VM-exit interruption-information and the exit qualification) living in vmcs02. For userspace-injected exceptions, this is not the case.
5. prepare_vmcs12() assumes that, when its exit_intr_info argument specifies valid information with a valid error code, it can VMREAD the VM-exit interruption error code from vmcs02. For userspace-injected exceptions, this is not the case.
Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
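One way to read "simply disallow this for now" is a guard in the set-events path that rejects a userspace-injected exception while the VCPU is running L2; a simplified sketch, with hypothetical names standing in for the real ioctl handler and guest-mode predicate.
```c
/* Simplified sketch; the demo_* names are placeholders for the real KVM
 * code. */
struct demo_vcpu;

int demo_is_guest_mode(const struct demo_vcpu *vcpu);	/* hypothetical */

static int demo_set_vcpu_events(struct demo_vcpu *vcpu, int exception_injected)
{
	/* An exception injected from userspace while L2 runs might have to
	 * be turned into an L2->L1 VM-exit, which none of the paths listed
	 * above can handle correctly yet, so refuse the request. */
	if (exception_injected && demo_is_guest_mode(vcpu))
		return -1;	/* the real code would return -EINVAL */

	/* ... apply the remaining event state as before ... */
	return 0;
}
```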
-
David Hildenbrand authored
If we already entered/are about to enter SMM, don't allow switching to INIT/SIPI_RECEIVED, otherwise the next call to kvm_apic_accept_events() will report a warning. Same applies if we are already in MP state INIT_RECEIVED and SMM is requested to be turned on. Refuse to set the VCPU events in this case. Fixes: cd7764fe ("KVM: x86: latch INITs while in system management mode") Cc: stable@vger.kernel.org # 4.2+ Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
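A hedged sketch of the rejected combination; KVM_MP_STATE_INIT_RECEIVED and KVM_MP_STATE_SIPI_RECEIVED are the real UAPI constants, while the surrounding helper is illustrative.
```c
#include <errno.h>
#include <linux/kvm.h>

/* Illustrative: reject MP states that kvm_apic_accept_events() would warn
 * about while the VCPU is in (or about to enter) SMM, where INITs are
 * latched.  The symmetric check lives in the set-events path. */
static int demo_check_mp_state(int in_or_entering_smm, __u32 mp_state)
{
	if (in_or_entering_smm &&
	    (mp_state == KVM_MP_STATE_INIT_RECEIVED ||
	     mp_state == KVM_MP_STATE_SIPI_RECEIVED))
		return -EINVAL;

	return 0;
}
```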
-
Paolo Bonzini authored
Its value has never changed; we might as well make it part of the ABI instead of using the return value of KVM_CHECK_EXTENSION(KVM_CAP_COALESCED_MMIO). Because PPC does not always make MMIO available, the code has to be made dependent on CONFIG_KVM_MMIO rather than KVM_COALESCED_MMIO_PAGE_OFFSET. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
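For context, a hedged userspace sketch of how the ring is located today through the capability check; once the offset is part of the ABI, the multiplier below can simply be the KVM_COALESCED_MMIO_PAGE_OFFSET constant.
```c
#include <stddef.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

/* vcpu_map is the existing mmap of the vcpu fd (the one holding kvm_run);
 * the coalesced-MMIO ring lives a fixed number of pages into it. */
static struct kvm_coalesced_mmio_ring *
demo_coalesced_ring(int kvm_fd, void *vcpu_map)
{
	long page_size = sysconf(_SC_PAGESIZE);
	int off = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_COALESCED_MMIO);

	if (off <= 0)
		return NULL;	/* coalesced MMIO not supported */

	return (struct kvm_coalesced_mmio_ring *)
		((char *)vcpu_map + (long)off * page_size);
}
```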
-
Paolo Bonzini authored
Remove code from architecture files that can be moved to virt/kvm, since there is already common code for coalesced MMIO. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> [Removed a pointless 'break' after 'return'.] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Paolo Bonzini authored
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Paolo Bonzini authored
In order to simplify adding exit reasons in the future, the array of exit reason names is now also sorted by exit reason code. Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Paolo Bonzini authored
Now use bit 6 of EPTP to optionally enable EPT accessed and dirty (A/D) bits. Another thing that changes is that, when EPT accessed and dirty bits are not in use, VMX treats accesses to guest paging structures as data reads. When they are in use (bit 6 of EPTP is set), they are treated as writes and the corresponding EPT dirty bit is set. The MMU didn't know this detail, so this patch adds it. We also have to fix up the exit qualification: it may be wrong because KVM sets bit 6 but the guest might not. L1 emulates EPT A/D bits using write permissions, so in principle it may be possible for EPT A/D bits to be used by L1 even though they are not available in hardware. The problem is that guest page-table walks would then be treated as reads rather than writes, so they would not cause an EPT violation. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [Fixed typo in walk_addr_generic() comment and changed bit clear + conditional-set pattern in handle_ept_violation() to conditional-clear] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
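A hedged sketch of EPTP construction with the optional A/D-enable bit. The bit layout follows the SDM (bits 2:0 memory type, bits 5:3 page-walk length minus 1, bit 6 A/D enable); the macro and helper names are illustrative.
```c
/* Illustrative EPTP builder; only the SDM-defined bit positions are real. */
#define DEMO_EPTP_MT_WB		0x6ull		/* bits 2:0: write-back */
#define DEMO_EPTP_PWL_4		(3ull << 3)	/* bits 5:3: 4-level walk */
#define DEMO_EPTP_AD_ENABLE	(1ull << 6)	/* bit 6: enable A/D flags */

static unsigned long long demo_construct_eptp(unsigned long long root_hpa,
					      int enable_ad_bits)
{
	unsigned long long eptp = DEMO_EPTP_MT_WB | DEMO_EPTP_PWL_4;

	if (enable_ad_bits)
		eptp |= DEMO_EPTP_AD_ENABLE;	/* paging-structure accesses
						 * are then treated as writes */

	return eptp | (root_hpa & ~0xfffull);	/* PML4 table address */
}
```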
-
Paolo Bonzini authored
This prepares the MMU paging code for EPT accessed and dirty bits, which can be enabled optionally at runtime. Code that updates the accessed and dirty bits will need a pointer to the struct kvm_mmu. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Paolo Bonzini authored
handle_ept_violation is checking for "guest-linear-address invalid" + "not a paging-structure walk". However, _all_ EPT violations without a valid guest linear address are paging structure walks, because those EPT violations happen when loading the guest PDPTEs. Therefore, the check can never be true, and even if it were, KVM doesn't care about the guest linear address; it only uses the guest *physical* address VMCS field. So, remove the check altogether. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Jim Mattson <jmattson@google.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Paolo Bonzini authored
Large pages at the PDPE level can be emulated by the MMU, so the bit can be set unconditionally in the EPT capabilities MSR. The same is true of 2MB EPT pages, though all Intel processors with EPT in practice support those. Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
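For reference, a hedged sketch of advertising both large-page sizes in the nested EPT capabilities; per the SDM, bit 16 of IA32_VMX_EPT_VPID_CAP reports 2MB EPT page support and bit 17 reports 1GB (PDPE-level) support. The macro and helper names are illustrative.
```c
#define DEMO_EPT_2MB_PAGE_BIT	(1ull << 16)	/* SDM: 2MB EPT pages */
#define DEMO_EPT_1GB_PAGE_BIT	(1ull << 17)	/* SDM: 1GB EPT pages */

/* Both bits can be set unconditionally: if the hardware lacks the large
 * page size, the shadow-EPT MMU simply emulates the mapping with smaller
 * host pages. */
static unsigned long long demo_nested_ept_caps(unsigned long long hw_caps)
{
	return hw_caps | DEMO_EPT_2MB_PAGE_BIT | DEMO_EPT_1GB_PAGE_BIT;
}
```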
-
Paolo Bonzini authored
Legacy device assignment has been deprecated since 4.2 (released 1.5 years ago). VFIO is better and everyone should have switched to it. If they haven't, this should convince them. :) Reviewed-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-