Commit d2517a49 authored by Ingo Molnar

perf_counter, x86: fix zero irq_period counters

The quirk to irq_period unearthed a robustness problem in the
hw_counter initialization sequence: we left irq_period at 0, which
was then quirked up to 2 ... which then generated a _lot_ of
interrupts during 'perf stat' runs, slowed them down and skewed
the counter results in general.

Initialize irq_period to the maximum instead.

[ Impact: fix perf stat results ]

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 0203026b
@@ -286,6 +286,9 @@ static int __hw_perf_counter_init(struct perf_counter *counter)
 		hwc->nmi = 1;
 	}
 
+	if (!hwc->irq_period)
+		hwc->irq_period = x86_pmu.max_period;
+
 	atomic64_set(&hwc->period_left,
 		     min(x86_pmu.max_period, hwc->irq_period));