Commit cb61de2f authored by Sean Christopherson, committed by Paolo Bonzini

KVM: nVMX: do not skip VMEnter instruction that succeeds

A successful VMEnter is essentially a fancy indirect branch that
pulls the target RIP from the VMCS.  Skipping the instruction is
unnecessary (RIP will get overwritten by the VMExit handler) and
is problematic because it can incorrectly suppress a #DB due to
EFLAGS.TF when a VMFail is detected by hardware (happens after we
skip the instruction).

Now that nested_vmx_run() no longer prematurely skips the
instruction, use the full kvm_skip_emulated_instruction() in the
VMFail path of nested_vmx_vmexit().  We also need to explicitly
update the GUEST_INTERRUPTIBILITY_INFO when loading vmcs12 host
state.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 16fb9a46
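
For readers following the reasoning above, the sketch below models the difference the commit depends on: the bare VMX skip_emulated_instruction() only advances RIP past the emulated instruction, while the full kvm_skip_emulated_instruction() also delivers the single-step #DB that completing an instruction owes the guest when EFLAGS.TF is set. This is a minimal, self-contained C model with invented toy_* names and fields, not the kernel code.

/*
 * Toy model of the two "skip" helpers referenced in the commit message.
 * The real helpers live in arch/x86/kvm/ and operate on struct kvm_vcpu
 * and the VMCS; the names and fields below are invented for illustration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define X86_EFLAGS_TF (1u << 8)	/* trap flag: single-step trap after each instruction */

struct toy_vcpu {
	uint64_t rip;
	uint64_t rflags;
	uint32_t insn_len;	/* length of the VMLAUNCH/VMRESUME being emulated */
	bool pending_db;	/* a #DB queued for injection into the guest */
};

/* Bare skip: only moves RIP past the instruction. */
static void toy_skip_emulated_instruction(struct toy_vcpu *v)
{
	v->rip += v->insn_len;
}

/*
 * Full skip: completing an instruction with EFLAGS.TF set must also
 * deliver a single-step #DB.  A VMLAUNCH/VMRESUME that VMFails still
 * completes as an instruction in L1, which is why the VMFail path in
 * nested_vmx_vmexit() now uses this variant.
 */
static void toy_kvm_skip_emulated_instruction(struct toy_vcpu *v)
{
	toy_skip_emulated_instruction(v);
	if (v->rflags & X86_EFLAGS_TF)
		v->pending_db = true;
}

int main(void)
{
	/* L1 single-steps a VMLAUNCH that VMFails; the #DB must not be lost. */
	struct toy_vcpu v = { .rip = 0x1000, .rflags = X86_EFLAGS_TF, .insn_len = 3 };

	toy_kvm_skip_emulated_instruction(&v);
	printf("rip=0x%llx pending_db=%d\n", (unsigned long long)v.rip, v.pending_db);
	return 0;
}
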
@@ -12855,15 +12855,6 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 		goto out;
 	}
 
-	/*
-	 * After this point, the trap flag no longer triggers a singlestep trap
-	 * on the vm entry instructions; don't call kvm_skip_emulated_instruction.
-	 * This is not 100% correct; for performance reasons, we delegate most
-	 * of the checks on host state to the processor.  If those fail,
-	 * the singlestep trap is missed.
-	 */
-	skip_emulated_instruction(vcpu);
-
 	/*
 	 * We're finally done with prerequisite checking, and can start with
 	 * the nested entry.
@@ -13243,6 +13234,8 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->host_rsp);
 	kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->host_rip);
 	vmx_set_rflags(vcpu, X86_EFLAGS_FIXED);
+	vmx_set_interrupt_shadow(vcpu, 0);
+
 	/*
 	 * Note that calling vmx_set_cr0 is important, even if cr0 hasn't
 	 * actually changed, because vmx_set_cr0 refers to efer set above.
@@ -13636,10 +13629,12 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 	 * in L1 which thinks it just finished a VMLAUNCH or
 	 * VMRESUME instruction, so we need to set the failure
 	 * flag and the VM-instruction error field of the VMCS
-	 * accordingly.
+	 * accordingly, and skip the emulated instruction.
 	 */
 	nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
+	kvm_skip_emulated_instruction(vcpu);
+
 	/*
 	 * Restore L1's host state to KVM's software model.  We're here
 	 * because a consistency check was caught by hardware, which
@@ -13648,12 +13643,6 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 	 */
 	nested_vmx_restore_host_state(vcpu);
 
-	/*
-	 * The emulated instruction was already skipped in
-	 * nested_vmx_run, but the updated RIP was never
-	 * written back to the vmcs01.
-	 */
-	skip_emulated_instruction(vcpu);
-
 	vmx->fail = 0;
 }
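
On the load_vmcs12_host_state() change above: vmx_set_interrupt_shadow(vcpu, 0) clears any STI or MOV-SS blocking recorded in GUEST_INTERRUPTIBILITY_INFO, consistent with a VM-exit leaving L1 with no interrupt shadow in effect. The snippet below approximates that helper's logic as a standalone sketch; the toy_* name and the plain variable standing in for the VMCS field are assumptions for illustration.

#include <stdint.h>
#include <stdio.h>

/* Interruptibility-state bits and KVM's shadow mask bits (values as in the kernel headers). */
#define GUEST_INTR_STATE_STI		0x00000001u
#define GUEST_INTR_STATE_MOV_SS		0x00000002u
#define KVM_X86_SHADOW_INT_MOV_SS	0x01u
#define KVM_X86_SHADOW_INT_STI		0x02u

/* Stand-in for the GUEST_INTERRUPTIBILITY_INFO field (real code uses vmcs_read32/write32). */
static uint32_t guest_intr_info;

/* Approximation of vmx_set_interrupt_shadow(): a mask of 0 drops any STI/MOV-SS shadow. */
static void toy_set_interrupt_shadow(uint32_t mask)
{
	uint32_t info = guest_intr_info;

	info &= ~(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS);
	if (mask & KVM_X86_SHADOW_INT_MOV_SS)
		info |= GUEST_INTR_STATE_MOV_SS;
	else if (mask & KVM_X86_SHADOW_INT_STI)
		info |= GUEST_INTR_STATE_STI;
	guest_intr_info = info;
}

int main(void)
{
	guest_intr_info = GUEST_INTR_STATE_STI;	/* pretend an STI shadow is currently recorded */
	toy_set_interrupt_shadow(0);		/* loading vmcs12 host state: no shadow */
	printf("interruptibility info = %#x\n", guest_intr_info);
	return 0;
}
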