Commit 649bccd7 authored by Like Xu, committed by Sean Christopherson

KVM: x86/pmu: Rewrite reprogram_counters() to improve performance

A valid pmc is always tested before pmu->reprogram_pmi is used. Eliminate
this redundancy by setting the counter's bit in the bitmask directly,
and in addition trigger KVM_REQ_PMU only once to save more CPU cycles.
Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20230214050757.9623-4-likexu@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
parent 8bca8c5c
arch/x86/kvm/vmx/pmu_intel.c
...
@@ -76,13 +76,13 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 static void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
 {
 	int bit;
-	struct kvm_pmc *pmc;
 
-	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX) {
-		pmc = intel_pmc_idx_to_pmc(pmu, bit);
-		if (pmc)
-			kvm_pmu_request_counter_reprogam(pmc);
-	}
+	if (!diff)
+		return;
+
+	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX)
+		set_bit(bit, pmu->reprogram_pmi);
+	kvm_make_request(KVM_REQ_PMU, pmu_to_vcpu(pmu));
 }
 
 static bool intel_hw_event_available(struct kvm_pmc *pmc)
...
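
The pattern the patch applies is general: accumulate per-item state in a bitmap, then raise a single request once, instead of one request per item. Below is a minimal stand-alone sketch of that pattern in plain userspace C. It is illustrative only, not kernel code; the names reprogram_pmi, requests_made, make_pmu_request(), and PMC_IDX_MAX are hypothetical stand-ins for the KVM equivalents.

#include <stdint.h>
#include <stdio.h>

#define PMC_IDX_MAX 64

static uint64_t reprogram_pmi;   /* stand-in for pmu->reprogram_pmi */
static int requests_made;        /* counts simulated KVM_REQ_PMU requests */

static void make_pmu_request(void)
{
	requests_made++;             /* stand-in for kvm_make_request() */
}

/* Batched version: one request regardless of how many bits changed. */
static void reprogram_counters(uint64_t diff)
{
	int bit;

	if (!diff)
		return;

	/* Mark every changed counter directly in the bitmap ... */
	for (bit = 0; bit < PMC_IDX_MAX; bit++)
		if (diff & (1ULL << bit))
			reprogram_pmi |= 1ULL << bit;

	/* ... and issue a single request for all of them. */
	make_pmu_request();
}

int main(void)
{
	reprogram_counters(0x0f);    /* four counters changed ... */
	printf("bitmap=%#llx requests=%d\n",
	       (unsigned long long)reprogram_pmi, requests_made);
	return 0;                    /* ... but only one request was made */
}

The early return on !diff also matters: callers frequently pass an unchanged control value, and bailing out before the loop avoids touching the bitmap or the request machinery at all in that common case.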