Commit 379a3c8e authored by Wanpeng Li, committed by Paolo Bonzini

KVM: VMX: Optimize posted-interrupt delivery for timer fastpath

While optimizing posted-interrupt delivery, especially for the timer
fastpath scenario, I measured kvm_x86_ops.deliver_posted_interrupt()
to introduce substantial latency, because the processor has to perform
all vmentry tasks, ack the posted-interrupt notification vector, read
the posted-interrupt descriptor, etc.

This is not only slow, it is also unnecessary when delivering an
interrupt to the current CPU (as is the case for the LAPIC timer),
because PIR->IRR and IRR->RVI synchronization is already performed
on vmentry.

Therefore, skip kvm_vcpu_trigger_posted_interrupt() in this case, and
instead do vmx_sync_pir_to_irr() on the EXIT_FASTPATH_REENTER_GUEST
fastpath as well.
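
As a minimal sketch of the idea (toy userspace C, not the kernel code;
pi_desc, running_vcpu and send_notification_ipi() below are illustrative
stand-ins for the real KVM/VMX structures, and the real logic is in
vmx_deliver_posted_interrupt(), shown in the first hunk):

/* Toy model only: real logic lives in vmx_deliver_posted_interrupt(). */
#include <stdbool.h>
#include <stdio.h>

struct pi_desc {
	unsigned long long pir[4];	/* 256-bit posted-interrupt request bitmap */
	bool on;			/* outstanding-notification bit */
};

struct vcpu {
	int id;
	struct pi_desc pi;
};

/* Stand-in for kvm_get_running_vcpu(): vCPU loaded on this physical CPU. */
static struct vcpu *running_vcpu;

static void send_notification_ipi(struct vcpu *v)
{
	printf("vcpu%d: send notification IPI (slow path)\n", v->id);
}

static void deliver_posted_interrupt(struct vcpu *v, int vector)
{
	/* Record the vector in the PIR, then raise the ON bit once. */
	v->pi.pir[vector / 64] |= 1ULL << (vector % 64);
	if (v->pi.on)
		return;			/* notification already outstanding */
	v->pi.on = true;

	/*
	 * The optimization: for the vCPU running on this CPU, the
	 * PIR->IRR sync on the next vmentry picks the interrupt up,
	 * so no notification IPI is needed.
	 */
	if (v == running_vcpu) {
		printf("vcpu%d: PIR set, synced on vmentry (fast path)\n", v->id);
		return;
	}
	send_notification_ipi(v);
}

int main(void)
{
	struct vcpu self = { .id = 0 }, other = { .id = 1 };

	running_vcpu = &self;
	deliver_posted_interrupt(&self, 236);	/* e.g. the LAPIC timer vector */
	deliver_posted_interrupt(&other, 236);
	return 0;
}

Only the cross-CPU case pays for the notification IPI; self-delivery just
sets bits in the posted-interrupt descriptor.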
Tested-by: Haiwei Li <lihaiwei@tencent.com>
Cc: Haiwei Li <lihaiwei@tencent.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1588055009-12677-6-git-send-email-wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 404d5d7b
arch/x86/kvm/vmx/vmx.c
@@ -3936,7 +3936,8 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
 	if (pi_test_and_set_on(&vmx->pi_desc))
 		return 0;
 
-	if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
+	if (vcpu != kvm_get_running_vcpu() &&
+	    !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
 		kvm_vcpu_kick(vcpu);
 
 	return 0;
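
(The new vcpu != kvm_get_running_vcpu() test is cheap: kvm_get_running_vcpu()
returns a per-CPU pointer to the vCPU currently loaded on this physical CPU.
When it matches, the interrupt targets the vCPU we are about to re-enter, so
the kick/IPI is skipped and the PIR bit is picked up on vmentry.)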
@@ -6812,6 +6813,8 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 			 * but it would incur the cost of a retpoline for now.
 			 * Revisit once static calls are available.
 			 */
+			if (vcpu->arch.apicv_active)
+				vmx_sync_pir_to_irr(vcpu);
 			goto reenter_guest;
 		}
 		exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
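
(Since the notification IPI, and the PIR->IRR synchronization done when acking
it, is now skipped for self-directed interrupts, the EXIT_FASTPATH_REENTER_GUEST
path must fold the PIR into the IRR itself before re-entering the guest. The
call is guarded by apicv_active because the posted-interrupt descriptor is only
in use when APICv is enabled.)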
virt/kvm/kvm_main.c
@@ -4646,6 +4646,7 @@ struct kvm_vcpu *kvm_get_running_vcpu(void)
 
 	return vcpu;
 }
+EXPORT_SYMBOL_GPL(kvm_get_running_vcpu);
 
 /**
  * kvm_get_running_vcpus - get the per-CPU array of currently running vcpus.
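
(kvm_get_running_vcpu() lives in generic KVM code, while its new caller is in
the kvm_intel vendor module, hence the added EXPORT_SYMBOL_GPL.)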