Commit 321c5658 authored by Yuki Shibuya, committed by Paolo Bonzini

KVM: x86: Inject pending interrupt even if pending nmi exist

Non-maskable interrupts (NMIs) are preferred to interrupts in the current
implementation. If an NMI is pending and NMI injection is blocked (i.e.
nmi_allowed() returns false), a pending interrupt is not injected and
enable_irq_window() is not executed, even though interrupt injection is
allowed.

In old kernels (e.g. 2.6.32), schedule() is often called in NMI context.
In this case, an interrupt is needed to reach the iret that ends the
NMI. The flag blocking new NMIs is not cleared until the guest executes
that iret, yet interrupts are blocked by the pending NMI. As a result,
iret is never invoked in the guest, and the guest starves until the
block is cleared by some other event (e.g. canceling the injection).

This patch injects a pending interrupt, when injection is allowed, even
if an NMI is blocked. In addition, if an interrupt is still pending after
inject_pending_event() runs, enable_irq_window() is executed regardless
of the NMI pending counter.

Cc: stable@vger.kernel.org
Signed-off-by: Yuki Shibuya <shibuya.yk@ncos.nec.co.jp>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent c26e5f30
@@ -6095,12 +6095,10 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
 	}
 
 	/* try to inject new event if pending */
-	if (vcpu->arch.nmi_pending) {
-		if (kvm_x86_ops->nmi_allowed(vcpu)) {
-			--vcpu->arch.nmi_pending;
-			vcpu->arch.nmi_injected = true;
-			kvm_x86_ops->set_nmi(vcpu);
-		}
+	if (vcpu->arch.nmi_pending && kvm_x86_ops->nmi_allowed(vcpu)) {
+		--vcpu->arch.nmi_pending;
+		vcpu->arch.nmi_injected = true;
+		kvm_x86_ops->set_nmi(vcpu);
 	} else if (kvm_cpu_has_injectable_intr(vcpu)) {
 		/*
 		 * Because interrupts can be injected asynchronously, we are
@@ -6569,10 +6567,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (inject_pending_event(vcpu, req_int_win) != 0)
 		req_immediate_exit = true;
 	/* enable NMI/IRQ window open exits if needed */
-	else if (vcpu->arch.nmi_pending)
-		kvm_x86_ops->enable_nmi_window(vcpu);
-	else if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
-		kvm_x86_ops->enable_irq_window(vcpu);
+	else {
+		if (vcpu->arch.nmi_pending)
+			kvm_x86_ops->enable_nmi_window(vcpu);
+		if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
+			kvm_x86_ops->enable_irq_window(vcpu);
+	}
 
 	if (kvm_lapic_enabled(vcpu)) {
 		update_cr8_intercept(vcpu);