Commit 84331ed8 authored by Radim Krčmář, committed by Greg Kroah-Hartman

KVM: i8254: change PIT discard tick policy

commit 7dd0fdff upstream.

The discard policy uses ack_notifiers to prevent injection of PIT interrupts
before the EOI from the last one has been received.

This patch changes the policy to always try to deliver the interrupt,
which makes a difference when its vector is in the ISR.
The old implementation would drop the interrupt, but the proposed one
injects it into the IRR, the way real hardware would.
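
The difference can be modeled in a few lines of user-space C (a simplified
sketch: the reinject and irq_ack names mirror fields of KVM's kvm_kpit_state,
but none of this is the in-kernel code):

	#include <stdbool.h>
	#include <stdio.h>

	/* Toy PIT state: irq_ack is set once the guest EOIs the last tick. */
	struct pit_state {
		bool reinject;	/* tick policy: true = reinject, false = discard */
		bool irq_ack;
	};

	/* Old behavior: with either policy, a tick that arrives before the
	 * previous one was acked is simply dropped. */
	static bool tick_old(struct pit_state *s)
	{
		if (!s->irq_ack)
			return false;		/* drop the tick */
		s->irq_ack = false;
		return true;			/* inject */
	}

	/* New behavior: the discard policy always attempts delivery; if the
	 * vector is still in the ISR it sits in the IRR, as on real hardware. */
	static bool tick_new(struct pit_state *s)
	{
		if (!s->reinject)
			return true;		/* discard policy: always inject */
		if (s->irq_ack) {
			s->irq_ack = false;
			return true;
		}
		return false;			/* reinject policy: coalesced into pending */
	}

	int main(void)
	{
		struct pit_state s = { .reinject = false, .irq_ack = false };

		/* Last tick not yet EOI'd: old code drops, new code injects. */
		printf("old: %d new: %d\n", tick_old(&s), tick_new(&s));
		return 0;
	}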

The old policy breaks legacy NMI watchdogs, where the PIT is used through
the virtual wire (LVT0): the PIT never sends an interrupt before receiving
an EOI, so a guest that deadlocks with interrupts disabled stops receiving
NMIs.

Note that NMIs are not acknowledged with an EOI, so the PIT also had to
send a normal interrupt through the IOAPIC.  (KVM's PIT is deeply rotten
and luckily not used much in modern systems.)
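
For context, the guest side of such a virtual-wire watchdog is set up
roughly like this (a sketch only; APIC_LVT0, APIC_DM_NMI and apic_write()
are the guest kernel's APIC primitives, and feature/error checks are omitted):

	#include <asm/apic.h>

	/* Guest side of the virtual wire: deliver whatever arrives on LINT0
	 * (here, the PIT tick) as an NMI, so the watchdog still fires while
	 * the guest runs with normal interrupts disabled. */
	static void pit_nmi_watchdog_setup(void)
	{
		apic_write(APIC_LVT0, APIC_DM_NMI);
	}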

Even though there is a chance of regressions, I think we can fix the
LVT0 NMI bug without introducing a new tick policy.
Reported-by: Yuki Shibuya <shibuya.yk@ncos.nec.co.jp>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 2cff86e5
arch/x86/kvm/i8254.c
@@ -245,7 +245,7 @@ static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian)
 		 * PIC is being reset. Handle it gracefully here
 		 */
 		atomic_inc(&ps->pending);
-	else if (value > 0)
+	else if (value > 0 && ps->reinject)
 		/* in this case, we had multiple outstanding pit interrupts
 		 * that we needed to inject. Reinject
 		 */
@@ -288,7 +288,9 @@ static void pit_do_work(struct kthread_work *work)
 	 * last one has been acked.
 	 */
 	spin_lock(&ps->inject_lock);
-	if (ps->irq_ack) {
+	if (!ps->reinject)
+		inject = 1;
+	else if (ps->irq_ack) {
 		ps->irq_ack = 0;
 		inject = 1;
 	}
@@ -317,10 +319,10 @@ static enum hrtimer_restart pit_timer_fn(struct hrtimer *data)
 	struct kvm_kpit_state *ps = container_of(data, struct kvm_kpit_state, timer);
 	struct kvm_pit *pt = ps->kvm->arch.vpit;
 
-	if (ps->reinject || !atomic_read(&ps->pending)) {
+	if (ps->reinject)
 		atomic_inc(&ps->pending);
-		queue_kthread_work(&pt->worker, &pt->expired);
-	}
+
+	queue_kthread_work(&pt->worker, &pt->expired);
 
 	if (ps->is_periodic) {
 		hrtimer_add_expires_ns(&ps->timer, ps->period);
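
The policy itself is chosen by userspace per VM through the
KVM_REINJECT_CONTROL ioctl; a minimal sketch, assuming vm_fd is a
descriptor obtained from KVM_CREATE_VM:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Disable reinjection, i.e. select the discard tick policy fixed by
	 * this patch. vm_fd is assumed to come from ioctl(kvm_fd, KVM_CREATE_VM, 0). */
	static int pit_select_discard_policy(int vm_fd)
	{
		struct kvm_reinject_control ctl = { .pit_reinject = 0 };

		return ioctl(vm_fd, KVM_REINJECT_CONTROL, &ctl);
	}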