author		Anton Blanchard <anton@samba.org>	2011-01-17 16:17:42 +1100
committer	Ingo Molnar <mingo@elte.hu>	2011-01-17 11:43:02 +0100
commit		4bca770ede796a1ef7af9c983166d5608d9ccfaf (patch)
tree		3a55c96dfb709415ee2c4e54e242c4f1a7cd01e2 /arch/powerpc
parent		7c46d8da09df22361d1d43465c4f1b06cecaf25f (diff)
powerpc: perf: Fix frequency calculation for overflowing counters
When profiling a benchmark that is almost 100% userspace, I noticed some
wildly inaccurate profiles that showed almost all time spent in the kernel.

Closer examination shows we were programming a tiny number of cycles into
the PMU after each overflow (about 200 away from the next overflow). This
gets us stuck in a loop which we eventually break out of by throttling the
PMU (there are regular throttle/unthrottle events in the log).

It looks like we aren't setting event->hw.last_period to something sane and
the frequency to period calculations in perf are going haywire.

With the following patch we find the correct period after a few interrupts
and stay there. I also see no more throttle events.

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
LKML-Reference: <20110117161742.5feb3761@kryten>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
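
For context on the mechanism the message describes: when an event runs in
frequency mode (attr.freq set), the generic perf core recomputes the sample
period at each overflow, feeding hw.last_period in as the number of events
that elapsed since the previous sample. Below is a simplified sketch of that
feedback loop; the names (hw_demo, adjust_period_demo) are made up for the
illustration and this is not the kernel's actual implementation.

	/*
	 * Simplified sketch (not the kernel's code) of how perf frequency
	 * mode derives a sample period.  The overflow path passes
	 * hw.last_period as "count": how many events elapsed during "nsec"
	 * nanoseconds of wall time.
	 */
	#include <stdint.h>

	struct hw_demo {
		uint64_t sample_period;   /* period currently programmed into the PMU */
		uint64_t last_period;     /* period that produced the last sample */
	};

	static void adjust_period_demo(struct hw_demo *hw, uint64_t sample_freq,
				       uint64_t nsec, uint64_t count)
	{
		/* observed event rate over the last interval, in events per second */
		uint64_t rate = nsec ? count * 1000000000ull / nsec : 0;

		/* period needed to get sample_freq samples per second at that rate */
		uint64_t period = sample_freq ? rate / sample_freq : 0;

		if (!period)
			period = 1;

		hw->sample_period = period;
		/*
		 * If "count" (hw.last_period) is stale and tiny, "rate" and hence
		 * "period" collapse towards 1: the counter gets reprogrammed only a
		 * handful of events from its next overflow, which is the runaway
		 * interrupt/throttle behaviour described in the message above.
		 */
	}
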
Diffstat (limited to 'arch/powerpc')
-rw-r--r--	arch/powerpc/kernel/perf_event.c	1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/arch/powerpc/kernel/perf_event.c b/arch/powerpc/kernel/perf_event.c
index 5674807..ab6f6be 100644
--- a/arch/powerpc/kernel/perf_event.c
+++ b/arch/powerpc/kernel/perf_event.c
@@ -1212,6 +1212,7 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
 			if (left <= 0)
 				left = period;
 			record = 1;
+			event->hw.last_period = event->hw.sample_period;
 		}
 		if (left < 0x80000000LL)
 			val = 0x80000000LL - left;
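
For readers tracing the surrounding hunk: the PMC raises its interrupt when
it counts up into the high bit (0x80000000), so the code programs it with
0x80000000 - left to make it overflow after the remaining "left" events.
The following is a minimal standalone demo of that arithmetic only; the
helper name pmc_reload_value is hypothetical and no real PMU registers are
touched.

	/*
	 * Minimal demo of the reprogramming arithmetic visible in the hunk
	 * above.  Assumption: the counter interrupts when it reaches
	 * 0x80000000, so writing 0x80000000 - left makes it overflow after
	 * "left" more events.
	 */
	#include <stdio.h>
	#include <stdint.h>

	static uint32_t pmc_reload_value(int64_t left)
	{
		if (left > 0 && left < 0x80000000LL)
			return (uint32_t)(0x80000000LL - left);
		/*
		 * Period too large for one 32-bit counter run: program 0 and
		 * take a full 2^31 events before the next interrupt.
		 */
		return 0;
	}

	int main(void)
	{
		int64_t left = 1000000;	/* e.g. one million events until the next sample */

		printf("program PMC to 0x%08x\n",
		       (unsigned int)pmc_reload_value(left));
		return 0;
	}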