Commit 6ce347af authored by Sean Christopherson, committed by Paolo Bonzini

KVM: nVMX: Preserve exception priority irrespective of exiting behavior

Short-circuit vmx_check_nested_events() if an exception is pending and
needs to be injected into L2; the priority between coincident events
does not depend on exiting behavior.  This fixes a bug where a
single-step #DB that is not intercepted by L1 is incorrectly dropped
due to servicing a VMX Preemption Timer VM-Exit.
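
For reference, a minimal sketch of the pre-patch flow, reconstructed
from the removed lines in the diff below (not the verbatim kernel code):

	/*
	 * Pre-patch (sketch): the "does L1 intercept this exception?"
	 * query gates the entire branch, so a pending #DB that L1 does
	 * not intercept makes the condition false and a lower-priority
	 * event, e.g. the VMX Preemption Timer VM-Exit, is serviced
	 * first, dropping the trap.
	 */
	if (vcpu->arch.exception.pending &&
	    nested_vmx_check_exception(vcpu, &exit_qual)) {
		if (block_nested_events)
			return -EBUSY;
		nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
		return 0;
	}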

Injected exceptions also need to be blocked if nested VM-Enter is
pending or an exception was already injected; otherwise injecting the
exception could overwrite an existing event injection from L1.
Technically, this scenario should be impossible, i.e. KVM shouldn't
inject its own exception during nested VM-Enter.  This will be addressed
in a future patch.
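
A minimal sketch of the restructured check, mirroring the added lines
in the diff below: the block_nested_events test now runs before the
intercept query, so a pending exception always has priority, and L1's
exiting behavior only decides *where* the exception is delivered.

	if (vcpu->arch.exception.pending) {
		if (block_nested_events)
			return -EBUSY;	/* defer: injecting now could clobber L1's event */
		if (!nested_vmx_check_exception(vcpu, &exit_qual))
			goto no_vmexit;	/* not intercepted by L1: inject into L2 */
		nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
		return 0;
	}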

Note, event priority among SMI, NMI and INTR is still incorrect for L2,
e.g. SMI should take priority over VM-Exit on NMI/INTR, and an NMI that
is injected into L2 should take priority over a VM-Exit on INTR.  This
will also be addressed in a future patch.

Fixes: b6b8a145 ("KVM: nVMX: Rework interception of IRQs and NMIs")
Reported-by: Jim Mattson <jmattson@google.com>
Cc: Oliver Upton <oupton@google.com>
Cc: Peter Shier <pshier@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 9c3d370a
@@ -3716,11 +3716,11 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 	/*
 	 * Process any exceptions that are not debug traps before MTF.
 	 */
-	if (vcpu->arch.exception.pending &&
-	    !vmx_pending_dbg_trap(vcpu) &&
-	    nested_vmx_check_exception(vcpu, &exit_qual)) {
+	if (vcpu->arch.exception.pending && !vmx_pending_dbg_trap(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
+		if (!nested_vmx_check_exception(vcpu, &exit_qual))
+			goto no_vmexit;
 		nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
 		return 0;
 	}
@@ -3733,10 +3733,11 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
-	if (vcpu->arch.exception.pending &&
-	    nested_vmx_check_exception(vcpu, &exit_qual)) {
+	if (vcpu->arch.exception.pending) {
 		if (block_nested_events)
 			return -EBUSY;
+		if (!nested_vmx_check_exception(vcpu, &exit_qual))
+			goto no_vmexit;
 		nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
 		return 0;
 	}
@@ -3771,6 +3772,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
+no_vmexit:
 	vmx_complete_nested_posted_interrupt(vcpu);
 	return 0;
 }