- 22 Aug, 2015 1 commit
-
-
Tudor Laurentiu authored
This was signaled by a static code analysis tool. Signed-off-by: Laurentiu Tudor <Laurentiu.Tudor@freescale.com> Reviewed-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
-
- 14 Aug, 2015 1 commit
-
-
Andy Lutomirski authored
VMX encodes access rights differently from LAR, and the latter is most likely what x86 people think of when they think of "access rights". Rename them to avoid confusion. Cc: kvm@vger.kernel.org Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 13 Aug, 2015 1 commit
-
-
Paolo Bonzini authored
Merge tag 'kvm-s390-next-20150812' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD
KVM: s390: fix and feature for kvm/next (4.3)
1. Error handling for irq routes
2. Gracefully handle STP time changes
   s390 supports syncing different systems via the STP protocol, which steers the TOD clocks to keep all participating clocks within the round-trip time between the systems. In case of specific out-of-sync events, Linux can opt in to accept sync checks. These result in non-monotonic jumps of the TOD clock, which Linux corrects via time offsets to keep the wall clock time monotonic. KVM guests also base their time on the host TOD, so we need to fix up the offset for them as well.
-
- 11 Aug, 2015 2 commits
-
-
Wei Huang authored
According to the AMD programmer's manual, AMD PERFCTRn is a 64-bit MSR which, unlike Intel perf counters, doesn't require sign extension. This patch removes the unnecessary conversion in the SVM vPMU code when PERFCTRn is being updated.
Signed-off-by: Wei Huang <wei@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
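For illustration only (not the actual patch), a self-contained sketch of the difference between a sign-extending write path and a plain 64-bit write; the helper names and the 32-bit width are assumptions made for the example:

    #include <stdint.h>
    #include <stdio.h>

    /* A write path for a counter narrower than 64 bits may sign-extend the
     * value from the counter width; a full 64-bit MSR such as AMD PERFCTRn
     * can take the value as-is. */
    static uint64_t write_sign_extended(uint64_t data, int width)
    {
            return (uint64_t)((int64_t)(data << (64 - width)) >> (64 - width));
    }

    static uint64_t write_full64(uint64_t data)
    {
            return data;    /* no conversion needed */
    }

    int main(void)
    {
            uint64_t v = 0x80000000ULL;     /* bit 31 set */
            printf("sign-extended from 32 bits: %#llx\n",
                   (unsigned long long)write_sign_extended(v, 32));
            printf("full 64-bit write:          %#llx\n",
                   (unsigned long long)write_full64(v));
            return 0;
    }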
-
Nicholas Krause authored
This fixes error handling in kvm_lapic_sync_from_vapic() by checking whether the call to kvm_read_guest_cached() returned an error code. If it did, return immediately to the caller instead of incorrectly calling apic_set_tpr() after a failed read.
Signed-off-by: Nicholas Krause <xerofoify@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
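A minimal sketch of the resulting flow, assuming the lapic helper and cache field names of that era (kvm_read_guest_cached() returns a negative errno on failure); this is an illustration, not the verbatim patch:

    static void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu)
    {
            u32 data;

            /* feature/attention checks elided in this sketch */

            /* bail out instead of updating the TPR from a failed read */
            if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.apic->vapic_cache,
                                      &data, sizeof(u32)))
                    return;

            apic_set_tpr(vcpu->arch.apic, data & 0xff);
    }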
-
- 07 Aug, 2015 1 commit
-
-
Nicholas Krause authored
This fixes the assumption that kvm_set_irq_routing() always succeeds: assign its return value to r, which kvm_arch_vm_ioctl() returns, instead of setting r to zero around the call and incorrectly telling the caller of kvm_arch_vm_ioctl() that the function succeeded.
Signed-off-by: Nicholas Krause <xerofoify@gmail.com>
Message-Id: <1438880754-27149-1-git-send-email-xerofoify@gmail.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
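A sketch of how the return value is propagated inside kvm_arch_vm_ioctl(), based on the s390 KVM_CREATE_IRQCHIP case the commit describes (the exact surrounding code is an assumption):

    case KVM_CREATE_IRQCHIP: {
            struct kvm_irq_routing_entry routing;

            r = -EINVAL;
            if (kvm->arch.use_irqchip) {
                    /* set up dummy routing; propagate any failure to the
                     * caller instead of unconditionally reporting success */
                    memset(&routing, 0, sizeof(routing));
                    r = kvm_set_irq_routing(kvm, &routing, 0, 0);
            }
            break;
    }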
-
- 05 Aug, 2015 10 commits
-
-
Xiao Guangrong authored
The logic used to check EPT misconfig is completely contained in the common reserved-bits check for sptes, so it can be removed.
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Xiao Guangrong authored
The #PF with PFEC.RSV = 1 is designed to speed up MMIO emulation; however, it is possible that the RSV #PF is caused by a real bug, i.e. misconfigured shadow page table entries. This patch enables a full check of the zero bits on shadow page table entries (which include not only bits reserved by the hardware, but also bits that will never be set in the SPTE), and then dumps the shadow page table hierarchy.
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Xiao Guangrong authored
We use the same data struct to check reserved bits on guest page tables and shadow page tables; split is_rsvd_bits_set() so that the logic can be shared between these two paths.
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
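A sketch of the shared helper this enables once the masks live in a common struct (field and function names here are illustrative; the struct itself is extracted by an earlier patch in this series, listed further down the page):

    static bool __is_rsvd_bits_set(struct rsvd_bits_validate *rsvd_check,
                                   u64 pte, int level)
    {
            int bit7 = (pte >> 7) & 1, low6 = pte & 0x3f;

            return (pte & rsvd_check->rsvd_bits_mask[bit7][level - 1]) != 0 ||
                   (rsvd_check->bad_mt_xwr & (1ull << low6)) != 0;
    }

    /* guest pte walks and shadow pte checks can now use the same logic,
     * each with its own set of masks */
    static bool is_rsvd_bits_set(struct kvm_mmu *mmu, u64 gpte, int level)
    {
            return __is_rsvd_bits_set(&mmu->guest_rsvd_check, gpte, level);
    }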
-
Xiao Guangrong authored
We have abstracted the data struct and functions used to check reserved bits on guest page tables; now extend the logic to check zero bits on shadow page tables. The zero bits on sptes include not only the bits reserved by the hardware, but also the bits that SPTEs will never use. For example, shadow pages will never use GB pages unless the guest uses them too.
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Xiao Guangrong authored
Since shadow EPT page tables and Intel nested guest page tables have the same format, split reset_rsvds_bits_mask_ept so that the logic can be reused by later patches which check zero bits on sptes.
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Xiao Guangrong authored
Since softmmu & AMD nested shadow page tables and guest page tables have the same format, split reset_rsvds_bits_mask so that the logic can be reused by later patches which check zero bits on sptes.
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Xiao Guangrong authored
These two fields of "struct kvm_mmu", rsvd_bits_mask and bad_mt_xwr, are used to check whether reserved bits are set on guest ptes. Move them into a data struct so that the same approach can be applied to check host shadow page table entries as well.
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
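A sketch of the extracted data struct as described by this series (the field names follow the commit text; the exact array sizes and the second, shadow-side instance added by a later patch are assumptions):

    struct rsvd_bits_validate {
            u64 rsvd_bits_mask[2][4];   /* indexed by PTE bit 7 and level - 1 */
            u64 bad_mt_xwr;             /* invalid EPT memtype/XWR combinations */
    };

    struct kvm_mmu {
            /* ... */
            /* one instance for guest ptes, one for host shadow ptes */
            struct rsvd_bits_validate guest_rsvd_check;
            struct rsvd_bits_validate shadow_zero_check;
            /* ... */
    };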
-
Xiao Guangrong authored
FNAME(is_rsvd_bits_set) does not depend on the guest mmu mode; move it to mmu.c so that it is no longer compiled multiple times.
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Xiao Guangrong authored
We got a bug where QEMU complained with "KVM: unknown exit, hardware reason 31" and KVM showed this info:
[84245.284948] EPT: Misconfiguration.
[84245.285056] EPT: GPA: 0xfeda848
[84245.285154] ept_misconfig_inspect_spte: spte 0x5eaef50107 level 4
[84245.285344] ept_misconfig_inspect_spte: spte 0x5f5fadc107 level 3
[84245.285532] ept_misconfig_inspect_spte: spte 0x5141d18107 level 2
[84245.285723] ept_misconfig_inspect_spte: spte 0x52e40dad77 level 1
This happens because we got an mmio #PF and the handler saw that the mmio spte had become normal (i.e. it points to a ram page). However, this is a valid situation after the introduction of fast mmio spte invalidation, which increases the generation number instead of zapping mmio sptes; an example is as follows:
1. QEMU drops the mmio region by adding a new memslot
2. invalidate all mmio sptes
3.
        VCPU 0                            VCPU 1
    access the invalid mmio spte
                                          access the region that originally
                                          was MMIO before
                                          set the spte to the normal ram map
    mmio #PF
    check the spte and see it has
    become a normal ram mapping !!!
This patch fixes the bug just by dropping the check in the mmio handler; it's good for backport. The full check will be introduced in later patches.
Reported-by: Pavel Shirshov <ru.pchel@gmail.com>
Tested-by: Pavel Shirshov <ru.pchel@gmail.com>
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Alex Williamson authored
The patch was munged on commit to re-order these tests, resulting in excessive warnings when trying to do device assignment. Return to the original ordering: https://lkml.org/lkml/2015/7/15/769
Fixes: 3e5d2fdc ("KVM: MTRR: simplify kvm_mtrr_get_guest_memory_type")
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 04 Aug, 2015 1 commit
-
-
Fan Zhang authored
If the host has STP enabled, the TOD of the host will be changed during synchronization phases. These are performed during a stop_machine() call. As the guest TOD is based on the host TOD, we have to make sure that:
- no VCPU is in the SIE (implicitly guaranteed via stop_machine())
- manual guest TOD calculations are not affected
"Epoch" is the guest TOD clock delta to the host TOD clock. We have to adjust that value during the STP synchronization and make sure that code that accesses the epoch won't get interrupted in between (via disabling preemption).
Signed-off-by: Fan Zhang <zhangfan@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
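A sketch of what the adjustment can look like, assuming a clock-change notifier that receives the applied TOD delta (the epoch field in the SIE control block is taken from the commit description; the hook name and details are illustrative):

    /* called from the TOD-change machinery with the delta that was applied */
    static int kvm_clock_sync(struct notifier_block *notifier,
                              unsigned long val, void *v)
    {
            struct kvm *kvm;
            struct kvm_vcpu *vcpu;
            unsigned long long *delta = v;
            int i;

            list_for_each_entry(kvm, &vm_list, vm_list) {
                    kvm->arch.epoch -= *delta;
                    kvm_for_each_vcpu(i, vcpu, kvm)
                            /* guest TOD = host TOD + epoch, so shift the
                             * epoch by the amount the host TOD moved */
                            vcpu->arch.sie_block->epoch -= *delta;
            }
            return NOTIFY_OK;
    }

Readers of the epoch would then wrap their accesses in preempt_disable()/preempt_enable() so the adjustment cannot race with manual guest TOD calculations.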
-
- 30 Jul, 2015 2 commits
-
-
Paolo Bonzini authored
The memory barriers are trying to protect against concurrent RCU-based interrupt injection, but the IRQ routing table is not yet valid at the time kvm->arch.vpic is written. Fix this by writing kvm->arch.vpic last. kvm_destroy_pic then need not set kvm->arch.vpic to NULL; modify it to take a struct kvm_pic * and reuse it if the IOAPIC creation fails.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
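A simplified sketch of the reordering (locking and the -EEXIST/online-vcpus checks are elided; names follow the commit text):

    case KVM_CREATE_IRQCHIP: {
            struct kvm_pic *vpic;

            r = -ENOMEM;
            vpic = kvm_create_pic(kvm);
            if (!vpic)
                    break;

            r = kvm_ioapic_init(kvm);
            if (r) {
                    kvm_destroy_pic(vpic);          /* takes the pic now */
                    break;
            }

            r = kvm_setup_default_irq_routing(kvm);
            if (r) {
                    kvm_ioapic_destroy(kvm);
                    kvm_destroy_pic(vpic);
                    break;
            }

            /* publish the pic only after the routing table is valid;
             * pairs with the read side in the irq injection path */
            smp_wmb();
            kvm->arch.vpic = vpic;
            break;
    }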
-
Paolo Bonzini authored
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 29 Jul, 2015 19 commits
-
-
Paolo Bonzini authored
There is no smp_rmb matching the smp_wmb. shared_msr_update is called from hardware_enable, which in turn is called via on_each_cpu, and on_each_cpu must imply a read memory barrier (on x86 the rmb is achieved simply through asm volatile in native_apic_mem_write).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
This is another remnant of ia64 support. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Merge tag 'kvm-s390-next-20150728' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-next
KVM: s390: Fixes and features for kvm/next (4.3)
1. Rework logging infrastructure (s390dbf) to integrate feedback learned when debugging performance and test issues
2. Some cleanups and simplifications for CMMA handling
3. Fix gdb debugging and single stepping on some instructions
4. Error handling for storage key setup
-
Christian Borntraeger authored
Depending on user space, some capabilities and vm attributes are enabled at runtime. Let's log those events and while we're at it, log querying the vm attributes as well. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Christian Borntraeger authored
In addition to the per VM debug logs, let's provide a global one for KVM-wide events, like new guests or fatal errors. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
-
Christian Borntraeger authored
Use the default log level 3 for state changing and/or seldom events, use 4 for others. Also change some numbers from %x to %d and vice versa to match documentation. If hex, let's prepend the numbers with 0x. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
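A sketch of the resulting convention, assuming the VM_EVENT/VCPU_EVENT s390dbf macros used by kvm-s390 (the format strings here are made up for illustration):

    /* level 3: state-changing and/or seldom events */
    VM_EVENT(kvm, 3, "ENABLE: CAP 0x%x", cap->cap);

    /* level 4: frequent events */
    VCPU_EVENT(vcpu, 4, "deliver: sclp parameter 0x%x", ext.ext_params);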
-
Christian Borntraeger authored
We do not use the exception logger, so the 2nd area is unused. Just have one area that is bigger (32 pages). At the same time we can limit the debug feature size to 7 longs, as the largest user has 3 parameters + string + boilerplate (vCPU, PSW mask, PSW addr).
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
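Roughly what this translates to for the s390dbf registration (a sketch; the debug area name, macro name, and error label are placeholders):

    /* 3 parameters + string + boilerplate: vCPU, PSW mask, PSW addr */
    #define KVM_DBF_BUF_SIZE (7 * sizeof(long))

    kvm->arch.dbf = debug_register("kvm-example", 32, 1, KVM_DBF_BUF_SIZE);
    if (!kvm->arch.dbf)
            goto out_err;
    debug_register_view(kvm->arch.dbf, &debug_sprintf_view);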
-
Christian Borntraeger authored
The old Documentation/s390/kvm.txt file is either outdated or described in Documentation/virtual/kvm/api.txt. Let's get rid of it. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
-
David Hildenbrand authored
This patch adds names for missing irq types to the trace events. In order to identify adapter irqs, the define is moved from interrupt.c to the other basic irq defines in uapi/linux/kvm.h. Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Christian Borntraeger authored
This reworks the debug logging for interrupt-related logs. Several changes:
- unify program int/irq
- improve decoding (e.g. use mcic instead of parm64 for machine check injection)
- remove the useless interrupt type number (the name is enough)
- rename "interrupt:" to "deliver:", as the other side is called "inject"
- use log level 3 for state-changing and/or seldom events (like machine checks, restart, ...)
- use log level 4 for frequent events
- use the 0x prefix for hex numbers
- add pfault done logging
- move some tracing outside the spinlock
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
-
Christian Borntraeger authored
We're not only interested in the address of the control block, but also in the requested subcommand and for the token subcommand, in the specified token address and masks. Suggested-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
-
David Hildenbrand authored
The "from user"/"from kernel" part of the log/trace messages is not always correct anymore and therefore not really helpful. Let's remove that part from the log + trace messages. For program interrupts, we can now move the logging/tracing part into the real injection function, as already done for the other injection functions. Reviewed-by: Jens Freimann <jfrei@linux.vnet.ibm.com> Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Christian Borntraeger authored
SPX (SET PREFIX) and SIGP (Set prefix) can change the prefix register of a CPU. As sigp set prefix may be handled in user space (KVM_CAP_S390_USER_SIGP), we would not log the changes triggered via SIGP in that case. Let's have just one VCPU_EVENT at the central location that tracks prefix changes. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
-
Christian Borntraeger authored
Let's add a vcpu event for the page reference handling and change the default debugging level for the ipl diagnose. Both are infrequent AND change the global state, so let's always log them.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Christian Borntraeger authored
Sometimes kvm stat counters are the only performance metric to check after something went wrong. Let's add additional counters for some diagnoses. In addition, do the counting for diag 10 all the time, even if we inject a program interrupt.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
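A sketch of the usual kvm_stat plumbing for such a counter (the counter and debugfs names here are hypothetical examples, not necessarily the ones added by this patch):

    /* arch/s390/include/asm/kvm_host.h */
    struct kvm_vcpu_stat {
            /* ... */
            u32 diagnose_10;                /* example counter */
    };

    /* arch/s390/kvm/kvm-s390.c */
    struct kvm_stats_debugfs_item debugfs_entries[] = {
            /* ... */
            { "diag_10", VCPU_STAT(diagnose_10) },
            { NULL }
    };

    /* arch/s390/kvm/diag.c: bump it unconditionally, even when the handler
     * ends up injecting a program interrupt */
    vcpu->stat.diagnose_10++;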
-
Dominik Dingel authored
There is no point in resetting the CMMA state if it was never enabled. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Dominik Dingel authored
As we already only enable CMMA when userspace requests it, we can safely move the additional checks to the request handler and avoid doing them multiple times. This also tells userspace if CMMA is available. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
David Hildenbrand authored
When guest debugging is active, space-switch events might be enforced by PER. While the PER events are correctly filtered out, space-switch-events could be forwarded to the guest, although from a guest point of view, they should not have been reported. Therefore we have to filter out space-switch events being concurrently reported with a PER event, if the PER event got filtered out. To do so, we theoretically have to know which instruction was responsible for the event. As the applicable instructions modify the PSW address, the address space set in the PSW and even the address space in cr1, we can't figure out the instruction that way. For this reason, we have to rely on the information about the old and new address space, in order to guess the responsible instruction type and do appropriate checks for space-switch events. Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Dominik Dingel authored
As enabling storage keys might fail, we should forward the error. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
- 23 Jul, 2015 2 commits
-
-
Paolo Bonzini authored
We can disable CD unconditionally when there is no assigned device. KVM now forces guest PAT to all-writeback in that case, so it makes sense to also force CR0.CD=0. When there are assigned devices, emulate cache-disabled operation through the page tables. This behavior is consistent with VMX microcode, where CD/NW are not touched by vmentry/vmexit. However, keep this dependent on the quirk because OVMF enables the caches too late. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
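A heavily hedged sketch of the described condition (the quirk constant and helper names are my best recollection of the KVM code of that time, not quoted from the patch):

    /* clear CD/NW in the hardware CR0 only when the quirk allows it and no
     * device is assigned; with assigned devices, cache-disabled operation
     * is emulated through UC page-table attributes instead */
    if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED) &&
        !kvm_arch_has_assigned_device(vcpu->kvm))
            hw_cr0 &= ~(X86_CR0_CD | X86_CR0_NW);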
-
Mihai Donțu authored
Allow a nested hypervisor to single step its guests. Signed-off-by: Mihai Donțu <mihai.dontu@gmail.com> [Fix overlong line. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-