1. 15 Feb, 2017 13 commits
    • kvm: nVMX: Split VMCS checks from nested_vmx_run() · ca0bde28
      Jim Mattson authored
      The checks performed on the contents of the vmcs12 are extracted from
      nested_vmx_run so that they can be used to validate a vmcs12 that has
      been restored from a checkpoint.
      Signed-off-by: Jim Mattson <jmattson@google.com>
      [Change prepare_vmcs02 and nested_vmx_load_cr3's last argument to u32,
       to match check_vmentry_postreqs.  Update comments for singlestep
       handling. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ca0bde28
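      A minimal sketch of the resulting structure, with simplified types; the
      check_vmentry_postreqs name comes from the note above, while
      check_vmentry_prereqs and all of the bodies here are illustrative
      assumptions, not the kernel's actual code:

      typedef unsigned int u32;

      struct vmcs12   { unsigned long guest_cr3; /* ... */ };
      struct kvm_vcpu { struct vmcs12 *cached_vmcs12; };

      /* Checks on vmcs12 control/host fields; usable on a restored vmcs12 too. */
      static int check_vmentry_prereqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
      {
              return 0;       /* 0 = pass, nonzero = VM-instruction error */
      }

      /* Guest-state checks; on failure fill an exit qualification for L1. */
      static int check_vmentry_postreqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
                                        u32 *exit_qual)
      {
              *exit_qual = 0;
              return 0;
      }

      /* nested_vmx_run() now just sequences the extracted checks before entry. */
      static int nested_vmx_run(struct kvm_vcpu *vcpu)
      {
              struct vmcs12 *vmcs12 = vcpu->cached_vmcs12;
              u32 exit_qual;

              if (check_vmentry_prereqs(vcpu, vmcs12))
                      return 1;       /* fail VMLAUNCH with a VM-instruction error */
              if (check_vmentry_postreqs(vcpu, vmcs12, &exit_qual))
                      return 1;       /* reflect a failed VM entry to L1 */
              /* ... proceed with prepare_vmcs02() and the actual VM entry ... */
              return 0;
      }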
    • kvm: nVMX: Refactor nested_get_vmcs12_pages() · 6beb7bd5
      Jim Mattson authored
      Perform the checks on vmcs12 state early, but defer the gpa->hpa lookups
      until after prepare_vmcs02: when we later restore the checkpointed state
      of a vCPU in guest mode, the gpa->hpa lookups cannot be done at restore
      time.
      Signed-off-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      6beb7bd5
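      A sketch of the reordering described above, with stubbed-out types and
      bodies; enter_vmx_non_root_mode() is used here purely as an illustrative
      name for the caller that sequences the two steps:

      struct kvm_vcpu { int dummy; };
      struct vmcs12   { int dummy; };

      static int  check_vmcs12_state(struct kvm_vcpu *v, struct vmcs12 *s)      { return 0; }
      static void prepare_vmcs02(struct kvm_vcpu *v, struct vmcs12 *s)          { }
      static void nested_get_vmcs12_pages(struct kvm_vcpu *v, struct vmcs12 *s) { }

      /*
       * The checks run first (they only read vmcs12 fields); the gpa->hpa
       * lookups run last, after prepare_vmcs02(), so a restore path that cannot
       * resolve guest physical addresses at restore time can reuse everything
       * before them.
       */
      static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
      {
              int err = check_vmcs12_state(vcpu, vmcs12);

              if (err)
                      return err;

              prepare_vmcs02(vcpu, vmcs12);
              nested_get_vmcs12_pages(vcpu, vmcs12);  /* deferred gpa->hpa lookups */
              return 0;
      }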
    • kvm: nVMX: Refactor handle_vmptrld() · a8bc284e
      Jim Mattson authored
      handle_vmptrld() is split into two parts: the part that handles the
      VMPTRLD instruction, and the part that establishes the current VMCS
      pointer.  The latter will be used when restoring the checkpointed state
      of a vCPU that had a valid VMCS pointer when a snapshot was taken.
      Signed-off-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      a8bc284e
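      A sketch of the split, with simplified types; set_current_vmptr() is a
      hypothetical name for the extracted helper:

      typedef unsigned long long gpa_t;

      struct vcpu_vmx { gpa_t current_vmptr; };

      /*
       * Part 2: establish the current VMCS pointer.  This is the piece a
       * restore path can call for a vCPU that had a valid VMCS pointer when
       * the snapshot was taken.
       */
      static void set_current_vmptr(struct vcpu_vmx *vmx, gpa_t vmptr)
      {
              /* map and cache the vmcs12 page, remember the pointer ... */
              vmx->current_vmptr = vmptr;
      }

      /* Part 1: the VMPTRLD instruction handler proper. */
      static int handle_vmptrld(struct vcpu_vmx *vmx, gpa_t vmptr)
      {
              /* decode the operand, check alignment and the VMCS revision id ... */
              set_current_vmptr(vmx, vmptr);
              return 0;       /* skip over the emulated instruction */
      }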
    • kvm: nVMX: Refactor handle_vmon() · e29acc55
      Jim Mattson authored
      handle_vmon() is split into two parts: the part that handles the VMXON
      instruction, and the part that modifies the vcpu state to transition
      from legacy mode to VMX operation. The latter will be used when
      restoring the checkpointed state of a vCPU that was in VMX operation
      when a snapshot was taken.
      Signed-off-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e29acc55
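      The same pattern for VMXON; enter_vmx_operation() is used here as an
      illustrative name for the extracted state-transition helper:

      struct kvm_vcpu { int vmxon; /* simplified stand-in for nested state */ };

      /*
       * Part 2: transition the vCPU from legacy mode into VMX operation.
       * This is what a restore path would call for a vCPU that was in VMX
       * operation when the snapshot was taken.
       */
      static int enter_vmx_operation(struct kvm_vcpu *vcpu)
      {
              /* allocate vmcs02, shadow VMCS, MSR bitmaps, ... */
              vcpu->vmxon = 1;
              return 0;
      }

      /* Part 1: the VMXON instruction handler proper. */
      static int handle_vmon(struct kvm_vcpu *vcpu)
      {
              /* check CPL, CR4.VMXE, IA32_FEATURE_CONTROL, the VMXON region ... */
              return enter_vmx_operation(vcpu);
      }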
    • kvm: nVMX: Prepare for checkpointing L2 state · cf8b84f4
      Jim Mattson authored
      Split prepare_vmcs12 into two parts: the part that stores the current L2
      guest state and the part that sets up the exit information fields. The
      former will be used when checkpointing the vCPU's VMX state.
      
      Modify prepare_vmcs02 so that it can construct a vmcs02 midway through
      L2 execution, using the checkpointed L2 guest state saved into the
      cached vmcs12 above.
      Signed-off-by: Jim Mattson <jmattson@google.com>
      [Rebasing: add from_vmentry argument to prepare_vmcs02 instead of using
       vmx->nested.nested_run_pending, because it is no longer 1 at the
       point prepare_vmcs02 is called. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      cf8b84f4
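      A sketch of the two halves and of the new argument mentioned in the note;
      sync_vmcs12() and prepare_vmcs12_exit_info() are illustrative names, and
      the types and bodies are stubs:

      struct kvm_vcpu { int dummy; };
      struct vmcs12   { int dummy; };

      /*
       * Half 1: copy the current L2 guest state into the cached vmcs12.
       * This is the part a checkpoint of the vCPU's VMX state reuses.
       */
      static void sync_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { }

      /* Half 2: fill in the exit information fields (reason, qualification, ...). */
      static void prepare_vmcs12_exit_info(struct kvm_vcpu *vcpu,
                                           struct vmcs12 *vmcs12) { }

      /*
       * prepare_vmcs02() takes an explicit from_vmentry flag instead of reading
       * nested_run_pending, so it can also build a vmcs02 midway through L2
       * execution from the checkpointed state stored in the cached vmcs12.
       */
      static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
                                int from_vmentry)
      {
              /* load L2 guest state from vmcs12 into the hardware VMCS ... */
              return 0;
      }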
    • kvm: x86: do not use KVM_REQ_EVENT for APICv interrupt injection · b95234c8
      Paolo Bonzini authored
      Since bf9f6ac8 ("KVM: Update Posted-Interrupts Descriptor when vCPU
      is blocked", 2015-09-18) the posted interrupt descriptor is checked
      unconditionally for PIR.ON.  Therefore we don't need KVM_REQ_EVENT to
      trigger the scan and, if NMIs or SMIs are not involved, we can avoid
      the complicated event injection path.
      
      Calling kvm_vcpu_kick if PIR.ON=1 is also useless, though it has been
      there since APICv was introduced.
      
      However, without the KVM_REQ_EVENT safety net KVM needs to be much
      more careful about races between vmx_deliver_posted_interrupt and
      vcpu_enter_guest.  First, the IPI for posted interrupts may be issued
      between setting vcpu->mode = IN_GUEST_MODE and disabling interrupts.
      If that happens, kvm_trigger_posted_interrupt returns true, but
      smp_kvm_posted_intr_ipi doesn't do anything about it.  The guest is
      entered with PIR.ON, but the posted interrupt IPI has not been sent
      and the interrupt is only delivered to the guest on the next vmentry
      (if any).  To fix this, disable interrupts before setting vcpu->mode.
      This ensures that the IPI is delayed until the guest enters non-root mode;
      it is then trapped by the processor causing the interrupt to be injected.
      
      Second, the IPI may be issued between kvm_x86_ops->sync_pir_to_irr(vcpu)
      and vcpu->mode = IN_GUEST_MODE.  In this case, kvm_vcpu_kick is called
      but it (correctly) doesn't do anything because it sees vcpu->mode ==
      OUTSIDE_GUEST_MODE.  Again, the guest is entered with PIR.ON but no
      posted interrupt IPI is pending; this time, the fix for this is to move
      the RVI update after IN_GUEST_MODE.
      
      Both issues were mostly masked by the liberal usage of KVM_REQ_EVENT,
      though the second could actually happen with VT-d posted interrupts.
      In both race scenarios KVM_REQ_EVENT would cancel guest entry, resulting
      in another vmentry which would inject the interrupt.
      
      This saves about 300 cycles on the self_ipi_* tests of vmexit.flat.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b95234c8
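      A simplified sketch of the entry ordering that closes both races:
      interrupts go off before vcpu->mode is published, and the PIR scan plus
      RVI update happen after IN_GUEST_MODE.  The helper bodies are stand-ins
      for the real primitives, not KVM's code:

      enum vcpu_mode { OUTSIDE_GUEST_MODE, IN_GUEST_MODE };

      struct kvm_vcpu { volatile enum vcpu_mode mode; };

      static void local_irq_disable_stub(void) { /* stand-in for local_irq_disable() */ }
      static void sync_pir_to_irr(struct kvm_vcpu *vcpu) { /* PIR scan + RVI write */ }
      static void enter_guest(struct kvm_vcpu *vcpu) { /* VMRESUME/VMLAUNCH */ }

      static void vcpu_enter_guest(struct kvm_vcpu *vcpu)
      {
              /*
               * Race 1 fix: disable interrupts *before* publishing IN_GUEST_MODE,
               * so a posted-interrupt IPI sent after this point stays pending and
               * is delivered once the guest is in non-root mode, where the CPU
               * turns it into an injected interrupt.
               */
              local_irq_disable_stub();
              vcpu->mode = IN_GUEST_MODE;

              /*
               * Race 2 fix: scan PIR and update RVI *after* IN_GUEST_MODE.  A
               * sender that sets PIR.ON after this scan now sees IN_GUEST_MODE
               * and sends the posted-interrupt IPI instead of a useless kick.
               */
              sync_pir_to_irr(vcpu);

              enter_guest(vcpu);
              vcpu->mode = OUTSIDE_GUEST_MODE;
      }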
    • KVM: x86: do not scan IRR twice on APICv vmentry · 76dfafd5
      Paolo Bonzini authored
      Calls to apic_find_highest_irr scan the IRR twice, once in
      vmx_sync_pir_to_irr and once in apic_search_irr.  Change
      sync_pir_to_irr to get the new maximum IRR from kvm_apic_update_irr;
      now that it does the computation, it can also do the RVI write.
      
      In order to avoid complications in svm.c, make the callback optional.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      76dfafd5
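      A sketch of the single-scan idea: kvm_apic_update_irr() returns the new
      maximum IRR while it merges PIR into IRR, so the caller can write RVI
      without a second scan.  The 256-bit IRR is modeled here as a flat array
      of eight 32-bit words, and the signatures and the RVI write are
      simplified stand-ins:

      typedef unsigned int u32;

      #define APIC_IRR_WORDS 8        /* 256 vectors / 32 bits per word */

      /*
       * Merge the 256-bit PIR into IRR and return the highest pending vector,
       * or -1 if none: one scan instead of a later apic_search_irr() pass.
       */
      static int kvm_apic_update_irr(u32 *irr, const u32 *pir)
      {
              int max_irr = -1;
              int i;

              for (i = 0; i < APIC_IRR_WORDS; i++) {
                      irr[i] |= pir[i];
                      if (irr[i])     /* highest set bit of the highest non-zero word */
                              max_irr = i * 32 + 31 - __builtin_clz(irr[i]);
              }
              return max_irr;
      }

      /* The sync_pir_to_irr callback can now do the RVI write itself. */
      static void vmx_sync_pir_to_irr(u32 *irr, const u32 *pir)
      {
              int max_irr = kvm_apic_update_irr(irr, pir);

              (void)max_irr;  /* here: write max_irr into RVI (GUEST_INTR_STATUS) */
      }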
    • KVM: x86: preparatory changes for APICv cleanups · 810e6def
      Paolo Bonzini authored
      Add return value to __kvm_apic_update_irr/kvm_apic_update_irr.
      Move vmx_sync_pir_to_irr around.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      810e6def
    • kvm: nVMX: move nested events check to kvm_vcpu_running · 0ad3bed6
      Paolo Bonzini authored
      vcpu_run calls kvm_vcpu_running, not kvm_arch_vcpu_runnable,
      and the former does not call check_nested_events.
      
      Once KVM_REQ_EVENT is removed from the APICv interrupt injection
      path, however, this would leave no place to trigger a vmexit
      from L2 to L1, causing a missed interrupt delivery while in guest
      mode.  This is caught by the "ack interrupt on exit" test in
      vmx.flat.
      
      [This does not change the calls to check_nested_events in
       inject_pending_event.  That is material for a separate cleanup.]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0ad3bed6
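      A minimal sketch of where the check now lives; the check_nested_events
      hook mirrors the message, everything else is a stub:

      struct kvm_vcpu { int guest_mode; int runnable; };

      static int  is_guest_mode(struct kvm_vcpu *vcpu)         { return vcpu->guest_mode; }
      static void check_nested_events(struct kvm_vcpu *vcpu)   { /* may vmexit to L1 */ }
      static int  kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) { return vcpu->runnable; }

      /*
       * vcpu_run() consults kvm_vcpu_running(), not kvm_arch_vcpu_runnable(),
       * so once KVM_REQ_EVENT no longer forces a trip through the event
       * injection path this is the place to give L1 a chance to intercept a
       * pending interrupt via an L2->L1 vmexit.
       */
      static int kvm_vcpu_running(struct kvm_vcpu *vcpu)
      {
              if (is_guest_mode(vcpu))
                      check_nested_events(vcpu);

              return kvm_arch_vcpu_runnable(vcpu);
      }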
    • KVM: vmx: clear pending interrupts on KVM_SET_LAPIC · 967235d3
      Paolo Bonzini authored
      Pending interrupts might be in the PI descriptor when the
      LAPIC is restored from an external state; we do not want
      them to be injected.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      967235d3
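      A sketch of the idea, assuming a post-state-restore hook and a simplified
      posted-interrupt descriptor layout (both illustrative):

      typedef unsigned int u32;

      /* Simplified posted-interrupt descriptor: 256-bit PIR plus control bits. */
      struct pi_desc { u32 pir[8]; u32 control; };

      #define PI_ON   0x1u    /* "outstanding notification" bit (illustrative) */

      /*
       * After userspace restores the LAPIC (KVM_SET_LAPIC), interrupts still
       * recorded in the PI descriptor belong to the pre-restore state and must
       * not be injected: wipe the PIR and clear ON.
       */
      static void apicv_post_state_restore(struct pi_desc *pi)
      {
              unsigned int i;

              for (i = 0; i < 8; i++)
                      pi->pir[i] = 0;
              pi->control &= ~PI_ON;
      }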
    • kvm: vmx: Use the hardware provided GPA instead of page walk · db1c056c
      Paolo Bonzini authored
      As in the SVM patch, the guest physical address is passed by
      VMX to x86_emulate_instruction already, so mark the GPA as available
      in vcpu->arch.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      db1c056c
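      A sketch of the idea: the EPT violation exit already reports the faulting
      guest physical address, so record it for the emulator instead of redoing
      the software page walk.  The field names follow the message's wording but
      are illustrative here:

      typedef unsigned long long gpa_t;

      struct kvm_vcpu_arch { int gpa_available; gpa_t gpa_val; };
      struct kvm_vcpu      { struct kvm_vcpu_arch arch; };

      static int handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t exit_gpa)
      {
              /* stash the hardware-provided GPA for x86_emulate_instruction() */
              vcpu->arch.gpa_available = 1;
              vcpu->arch.gpa_val = exit_gpa;

              /* ... hand off to the emulator, which can now skip the page walk ... */
              return 0;
      }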
    • Merge branch 'kvm-ppc-next' of... · ee106891
      Paolo Bonzini authored
      Merge branch 'kvm-ppc-next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc into HEAD
      
      This brings in two fixes for potential host crashes, from Ben
      Herrenschmidt and Nick Piggin.
      ee106891
  2. 09 Feb, 2017 3 commits
  3. 08 Feb, 2017 15 commits
  4. 07 Feb, 2017 8 commits
  5. 06 Feb, 2017 1 commit
    • KVM: s390: detect some program check loops · fb7dc1d4
      Christian Borntraeger authored
      Sometimes (e.g. during early boot) a guest is broken in such a way that it
      loops at 100% CPU delivering operation exceptions (illegal operation), but
      the program check new PSW is not set properly. This results in code being
      read from address zero, which usually contains another illegal op. Let's
      detect this case and return to userspace. Instead of detecting this only
      for address zero, apply a heuristic that works for any program check new
      PSW.
      We do not want guest problem state to be able to trigger a guest panic,
      e.g. by faulting on an address that is the same as the program check
      new PSW, so we check that the problem state bit is off.
      
      With proper handling in userspace we
      a: get rid of the CPU consumption of such broken guests
      b: keep the program old PSW. This makes it possible to find the original
         illegal operation, making debugging such early boot issues much easier
         than with single stepping
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      fb7dc1d4
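      A sketch of the heuristic in C, assuming simplified PSW structures; the
      real check lives in the s390 program-interrupt delivery path, and the
      names and mask value here are illustrative:

      typedef unsigned long long u64;

      #define PSW_MASK_PSTATE 0x0001000000000000ULL   /* problem-state bit */

      struct psw { u64 mask; u64 addr; };

      /*
       * If a program check is about to be delivered while the current PSW
       * address already equals the program check new PSW address, delivering
       * it will just fault again at the same place: the guest is looping.
       * Only report this when the guest is not in problem state, so guest
       * userspace cannot use it to force an exit to the host's userspace.
       */
      static int pgm_check_loop_detected(const struct psw *current_psw,
                                         const struct psw *pgm_new_psw)
      {
              if (current_psw->mask & PSW_MASK_PSTATE)
                      return 0;       /* problem state must not trigger this */

              return current_psw->addr == pgm_new_psw->addr;
      }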