- 06 Aug, 2015 15 commits
-
-
Andi Kleen authored
Later patches need to cheaply check that the branch mode is in ANY. Add a new function to check all event attrs and add a flag to the report state, which is then initialized. v2: Rename flag Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/1437233094-12844-3-git-send-email-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
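For illustration, a minimal sketch of such a check, assuming a hypothetical helper name and a plain array of perf_event_attr rather than perf's evlist; the actual function and flag added by the patch may be named differently:

    /* Sketch only: helper name and calling convention are assumptions. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <linux/perf_event.h>

    /* Return true only if every event samples branches in the ANY mode. */
    static bool attrs_branch_mode_is_any(const struct perf_event_attr *attrs, size_t n)
    {
        size_t i;

        for (i = 0; i < n; i++) {
            /* An event without PERF_SAMPLE_BRANCH_ANY means the session is
             * filtered, so the cheap "branch mode is ANY" flag stays off. */
            if (!(attrs[i].branch_sample_type & PERF_SAMPLE_BRANCH_ANY))
                return false;
        }
        return true;
    }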
-
Andi Kleen authored
cycles is a new branch_info field available on some CPUs that indicates the time deltas between branches in the LBR. Add a sort key and output code for the cycles to allow displaying the basic block cycles individually in perf report. We also pass in the cycles for weight when LBRs are processed, which allows getting global and local weight, to get an estimate of the total cost. Also print the cycles information for perf report -D, and add printing for the previously missing LBR flags (mispredict etc.). Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/1437233094-12844-2-git-send-email-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Ben Hutchings authored
perf currently fails to build on MIPS as there is no tools/perf/arch/mips/Build file. Adding an empty file fixes this as there are no MIPS-specific sources to build. It looks like the same is needed for Alpha and PA-RISC, though I haven't been able to test those. Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Fixes: 5e8c0fb6 ("perf build: Add arch x86 objects building") Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1438704627.7315.2.camel@decadent.org.uk Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jiri Olsa authored
Moving counter processing code into stat object as perf_stat__process_counter. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1437481927-29538-8-git-send-email-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jiri Olsa authored
Passing 'struct perf_stat_config' into process_counter(), so that we can make process_counter() non static and use it from other places. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1437481927-29538-7-git-send-email-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jiri Olsa authored
Moving 'interval' into struct perf_stat_config. The point is to centralize the base stat config so it could be used locally together with other stat routines in other parts of perf code. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1437481927-29538-6-git-send-email-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jiri Olsa authored
Moving 'output' into struct perf_stat_config. The point is to centralize the base stat config so it could be used locally together with other stat routines in other parts of perf code. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1437481927-29538-5-git-send-email-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jiri Olsa authored
Moving 'scale' into struct perf_stat_config. The point is to centralize the base stat config so it could be used locally together with other stat routines in other parts of perf code. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1437481927-29538-4-git-send-email-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jiri Olsa authored
Moving 'aggr_mode' into the new struct. The point is to centralize the base stat config so it could be used locally together with other stat routines in other parts of perf code. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1437481927-29538-3-git-send-email-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
Commit 7b6ff0bd ("perf probe ppc64le: Fixup function entry if using kallsyms lookup") adds 'struct map' into probe-event.h but does not forward declare it. This patch fixes it. Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: David Ahern <dsahern@gmail.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kaixu Xia <xiakaixu@huawei.com> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Fixes: 7b6ff0bd ("perf probe ppc64le: Fixup function entry if using kallsyms lookup") Link: http://lkml.kernel.org/n/1436445342-1402-30-git-send-email-wangnan0@huawei.com [ No need to include map.h, just forward declare 'struct map' ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
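A tiny illustration of the pattern involved (identifiers below are made up, only the idea matters): a forward declaration suffices because the header only deals in pointers to struct map, so pulling in map.h would be unnecessary:

    /* Hypothetical header excerpt, not the real probe-event.h API. */
    struct map;    /* forward declaration is enough for pointer-only use */

    /* illustrative prototype using the opaque pointer */
    int example_probe_fixup_entry(struct map *map, unsigned long *address);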
-
Wang Nan authored
va_args alternative to eprintf(). Signed-off-by: Wang Nan <wangnan0@huawei.com> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: David Ahern <dsahern@gmail.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kaixu Xia <xiakaixu@huawei.com> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/n/1436445342-1402-19-git-send-email-wangnan0@huawei.com [ split from another patch ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Wang Nan authored
By copying the BPF-related operations to the uprobe processing path, this patch allows users to attach BPF programs to uprobes, just as they can already do on kprobes. After this patch, users are allowed to use PERF_EVENT_IOC_SET_BPF on a uprobe perf event, which makes it possible to profile user space programs and kernel events together using BPF. Because of this patch, CONFIG_BPF_EVENTS should be selected by CONFIG_UPROBE_EVENT to ensure trace_call_bpf() is compiled even if KPROBE_EVENT is not set. Signed-off-by: Wang Nan <wangnan0@huawei.com> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: David Ahern <dsahern@gmail.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kaixu Xia <xiakaixu@huawei.com> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1435716878-189507-3-git-send-email-wangnan0@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
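A minimal userspace sketch of the attach step this enables, assuming 'uprobe_fd' is a perf event fd already opened on a uprobe-based tracepoint and 'bpf_prog_fd' is a loaded BPF program fd; the event setup and most error handling are omitted:

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/perf_event.h>

    /* With this patch the ioctl below is accepted for uprobe perf events,
     * not just kprobe ones. */
    static int attach_bpf_to_uprobe(int uprobe_fd, int bpf_prog_fd)
    {
        if (ioctl(uprobe_fd, PERF_EVENT_IOC_SET_BPF, bpf_prog_fd) < 0) {
            perror("PERF_EVENT_IOC_SET_BPF");
            return -1;
        }
        return 0;
    }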
-
Wang Nan authored
Commit e1abf2cc ("bpf: Fix the build on BPF_SYSCALL=y && !CONFIG_TRACING kernels, make it more configurable") updated the building condition of bpf_trace.o from CONFIG_BPF_SYSCALL to CONFIG_BPF_EVENTS, but the corresponding #ifdef controller in trace_events.h for trace_call_bpf() was not changed. Which, in theory, is incorrect. With current Kconfigs, we can create a .config with CONFIG_BPF_SYSCALL=y and CONFIG_BPF_EVENTS=n by unselecting CONFIG_KPROBE_EVENT and selecting CONFIG_BPF_SYSCALL. With these options, trace_call_bpf() will be defined as an extern function, but if anyone calls it a symbol missing error will be triggered since bpf_trace.o was not built. This patch changes the #ifdef controller for trace_call_bpf() from CONFIG_BPF_SYSCALL to CONFIG_BPF_EVENTS. I'll show its correctness:

Before this patch:

    BPF_SYSCALL   BPF_EVENTS   trace_call_bpf   bpf_trace.o
    y             y            normal           compiled
    n             n            inline           not compiled
    y             n            normal           not compiled (incorrect)
    n             y            impossible (BPF_EVENTS depends on BPF_SYSCALL)

After this patch:

    BPF_SYSCALL   BPF_EVENTS   trace_call_bpf   bpf_trace.o
    y             y            normal           compiled
    n             n            inline           not compiled
    y             n            inline           not compiled (fixed)
    n             y            impossible (BPF_EVENTS depends on BPF_SYSCALL)

So this patch doesn't break anything. QED. Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: David Ahern <dsahern@gmail.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kaixu Xia <xiakaixu@huawei.com> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1435716878-189507-2-git-send-email-wangnan0@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
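Schematically, the guard change amounts to something like the following (simplified from the header; the stub's exact body may differ):

    #ifdef CONFIG_BPF_EVENTS
    unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx);
    #else
    static inline unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
    {
        return 1;    /* no BPF support built in: behave as "record the event" */
    }
    #endif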
-
Milian Wolff authored
It is cumbersome to manually calculate the total time spent in a given syscall by multiplying the average value with the number of calls. Instead, we now do this directly inside perf trace. Note that this is also done by 'strace', which even adds a column with relative numbers - something we could do in the future. Example: perf trace -s find /some/folder > /dev/null

 Summary of events:

 find (19976), 700123 events, 100.0%, 0.000 msec

   syscall            calls     total       min       avg       max    stddev
                                (msec)    (msec)    (msec)    (msec)       (%)
   --------------- -------- --------- --------- --------- --------- ---------
   read                   4     0.006     0.001     0.002     0.003    27.42%
   write               8046     9.617     0.001     0.001     0.035     0.56%
   open               34196    40.384     0.001     0.001     0.071     0.30%
   close              68375    57.104     0.001     0.001     0.076     0.25%
   stat                   4     0.004     0.001     0.001     0.001     3.14%
   fstat              34189    27.518     0.001     0.001     0.060     0.34%
   mmap                  13     0.029     0.001     0.002     0.003    10.74%
   mprotect               6     0.018     0.002     0.003     0.005    17.04%
   munmap                 3     0.014     0.003     0.005     0.006    24.87%
   brk                   87     0.490     0.001     0.006     0.016     6.50%
   ioctl                  3     0.004     0.001     0.001     0.003    36.39%
   access                 1     0.004     0.004     0.004     0.004     0.00%
   uname                  1     0.001     0.001     0.001     0.001     0.00%
   getdents           68393   143.600     0.001     0.002     0.187     0.95%
   fchdir             68371    56.980     0.001     0.001     0.111     0.39%
   arch_prctl             1     0.001     0.001     0.001     0.001     0.00%
   openat             34184    41.737     0.001     0.001     0.102     0.41%
   newfstatat         34184    41.180     0.001     0.001     0.064     0.34%

Signed-off-by: Milian Wolff <milian.wolff@kdab.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> LPU-Reference: 1438853069-5902-1-git-send-email-milian.wolff@kdab.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Ingo Molnar authored
Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

New features:

- Deref sys_enter pointer args with contents from probe:vfs_getname, showing pathnames instead of pointers in many syscalls in 'perf trace'. (Arnaldo Carvalho de Melo)
- Make 'perf trace' write to stderr by default, just like 'strace'. (Milian Wolff)

Infrastructure changes:

- color_vfprintf() fixes. (Andi Kleen, Jiri Olsa)
- Allow enabling/disabling PERF_SAMPLE_TIME per event. (Kan Liang)
- Fix build errors with mipsel-linux-uclibc compiler. (Petri Gynther)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 05 Aug, 2015 11 commits
-
-
Petri Gynther authored
linux/tools$ make ARCH=mips CROSS_COMPILE=mipsel-linux- perf
...
config/Makefile:256: *** No gnu/libc-version.h found, please install glibc-dev[el]. Stop.
make[1]: *** [all] Error 2
make: *** [perf] Error 2
...
In file included from builtin-sched.c:13:0:
util/cloexec.h:8:12: error: redundant redeclaration of ‘sched_getcpu’ [-Werror=redundant-decls]
 extern int sched_getcpu(void) __THROW;
mipsel-buildroot-linux-uclibc/sysroot/usr/include/bits/sched.h:88:12: note: previous declaration of ‘sched_getcpu’ was here
 extern int sched_getcpu (void) __THROW;

uclibc info:

sysroot/usr/include/bits/uClibc_config.h
 __UCLIBC_MAJOR__ 0
 __UCLIBC_MINOR__ 9
 __UCLIBC_SUBLEVEL__ 33

sysroot/usr/include/features.h
 __UCLIBC__ 1
 __GLIBC__ 2
 __GLIBC_MINOR__ 2

Signed-off-by: Petri Gynther <pgynther@google.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1438735081-24131-1-git-send-email-pgynther@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Milian Wolff authored
Without this patch, it is cumbersome to read the trace output while ignoring the normal, potentially verbose, output of the debuggee. One common example is doing something like the following: perf trace -s find /tmp > /dev/null Without this patch, the trace summary will be lost. Now, it will still be printed at the end. This behavior is also applied by strace. Cc: Milian Wolff <milian.wolff@kdab.com> Cc: David Ahern <dsahern@gmail.com> Link: http://lkml.kernel.org/n/tip-tqnks6y2cnvm5f9g2dsfr7zl@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Andi Kleen authored
color_vprintf was including the length of the invisible escape sequences in its return argument. Don't include them to make the return value usable for indentation calculations. v2: Add comment, rebase Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/1438649408-20807-3-git-send-email-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
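The idea can be sketched as follows (illustrative only, not perf's actual color.c code): the escape sequences are written to the stream but only the payload length is returned:

    #include <stdarg.h>
    #include <stdio.h>

    #define COLOR_RESET "\033[m"

    /* Print colored text but report only the number of visible characters,
     * so callers can use the return value for indentation math. */
    static int color_vfprintf_sketch(FILE *fp, const char *color,
                                     const char *fmt, va_list args)
    {
        int visible;

        fputs(color, fp);                   /* escape sequence: not counted */
        visible = vfprintf(fp, fmt, args);  /* payload: counted */
        fputs(COLOR_RESET, fp);             /* escape sequence: not counted */

        return visible;
    }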
-
Jiri Olsa authored
Seems like it's always '\n' through color_fprintf_ln, which is not used at all, removing.. ;-) Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/1438649408-20807-2-git-send-email-andi@firstfloor.org Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Kan Liang authored
Pass the global callchain_param into parse_callchain_record_opt and perf_evsel__config_callgraph as a parameter, so we can reuse these functions to parse/config a local callchain param. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/1438677022-34296-3-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Kan Liang authored
This patchkit adds the ability to turn off time stamps per event. One useful case for partial time is to work with a per-event callgraph to enable "PEBS threshold > 1" (https://lkml.org/lkml/2015/5/10/196), which can significantly reduce the sampling overhead. The event samples with time stamps off will not be ordered. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/1438677022-34296-2-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
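Mechanically, partial time is just the PERF_SAMPLE_TIME bit being set on some events and cleared on others; a hedged sketch at the perf_event_attr level (the perf tool exposes this through event syntax rather than code like this):

    #include <string.h>
    #include <linux/perf_event.h>

    static void setup_attrs(struct perf_event_attr *with_time,
                            struct perf_event_attr *without_time)
    {
        memset(with_time, 0, sizeof(*with_time));
        with_time->size = sizeof(*with_time);
        with_time->type = PERF_TYPE_HARDWARE;
        with_time->config = PERF_COUNT_HW_CPU_CYCLES;
        with_time->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_TIME;

        *without_time = *with_time;
        /* This event's samples carry no timestamp, so they cannot be
         * time-ordered against samples from the other event. */
        without_time->sample_type &= ~PERF_SAMPLE_TIME;
    }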
-
Arnaldo Carvalho de Melo authored
Those were covered and tested in this cset: access, chdir, chmod, chown, chroot, creat, getxattr, inotify_add_watch, lchown, lgetxattr, listxattr, lsetxattr, mkdir, mkdirat, mknod, rmdir, faccessat, newfstatat, openat, readlink, readlinkat, removexattr, setxattr, statfs, swapon, swapoff, truncate, unlinkat, utime, utimes, utimensat. E.g.:

 # trace -e statfs,access,mkdir mkdir /tmp/bla
   0.285 (0.020 ms): mkdir/2799 access(filename: /etc/ld.so.preload, mode: R ) = -1 ENOENT No such file or directory
   1.070 (0.032 ms): mkdir/2799 statfs(pathname: /sys/fs/selinux, buf: 0x7ffeafbdc930) = 0
   1.087 (0.013 ms): mkdir/2799 statfs(pathname: /sys/fs/selinux, buf: 0x7ffeafbdc820) = 0
   1.189 (0.014 ms): mkdir/2799 access(filename: /etc/selinux/config ) = 0
   1.905 (0.610 ms): mkdir/2799 mkdir(pathname: /tmp/bla, mode: 511 ) = 0
 #

Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Milian Wolff <mail@milianw.de> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-wbqtnlktquun3wtpjdz3okul@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
To work like strace and dereference syscall pointer args we need to insert probes (or tracepoints) right after we copy those bytes from userspace. Since we're formatting the syscall args at raw_syscalls:sys_enter time, we need to have a formatter that just stores the position where, later, when we get the probe:vfs_getname, we can insert the pointer contents. Now, if a probe:vfs_getname with this format is in place:

 # perf probe -l
 probe:vfs_getname (on getname_flags:72@/home/git/linux/fs/namei.c with pathname)

That was, in this case, put in place with:

 # perf probe 'vfs_getname=getname_flags:72 pathname=filename:string'
 Added new event:
   probe:vfs_getname (on getname_flags:72 with pathname=filename:string)

 You can now use it in all perf tools, such as:

   perf record -e probe:vfs_getname -aR sleep 1
 #

Then 'perf trace' will notice that and do the pointer -> contents expansion:

 # trace -e open touch /tmp/bla
   0.165 (0.010 ms): touch/17752 open(filename: /etc/ld.so.cache, flags: CLOEXEC) = 3
   0.195 (0.011 ms): touch/17752 open(filename: /lib64/libc.so.6, flags: CLOEXEC) = 3
   0.512 (0.012 ms): touch/17752 open(filename: /usr/lib/locale/locale-archive, flags: CLOEXEC) = 3
   0.582 (0.012 ms): touch/17752 open(filename: /tmp/bla, flags: CREAT|NOCTTY|NONBLOCK|WRONLY, mode: 438) = 3
 #

Roughly equivalent to strace's output:

 # strace -rT -e open touch /tmp/bla
   0.000000 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 <0.000039>
   0.000317 open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 <0.000102>
   0.001461 open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3 <0.000072>
   0.000405 open("/tmp/bla", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = 3 <0.000055>
   0.000641 +++ exited with 0 +++
 #

Now we need to look at all syscalls that are marked as pointers and have some well known names ("filename", "pathname", etc) and set the arg formatter to the one used for the "open" syscall in this patch. This implementation works for syscalls with just a string being copied from userspace; for matching syscalls with more than one string being copied via the same probe/trace point (vfs_getname) we need to extend the vfs_getname probe spec to include the pointer too, but there are some problems with that in 'perf probe' or the kernel kprobes code, need to investigate before considering supporting multiple strings per syscall.

Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Milian Wolff <mail@milianw.de> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-xvuwx6nuj8cf389kf9s2ue2s@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
We were using it as a magic number, 1024, fix that. Eventually we need to stop doing it per line, and do it per arg, traversing the args at output time, to avoid the memmove() calls that will be used in the next cset to replace pointers present at raw_syscalls:sys_enter time with their contents that appear at probe:vfs_getname time, before raw_syscalls:sys_exit time. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Milian Wolff <mail@milianw.de> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-4sz3wid39egay1pp8qmbur4u@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
So that we can later decide if we will store where to expand the pathname once we are handling vfs_getname or if we should instead just go on and straight away print the pointer. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Milian Wolff <mail@milianw.de> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-ytxk5s5jpc50wahffmlxgxuw@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
We were accessing trace->syscalls.events members even when that struct wasn't initialized, i.e. when --no-syscalls was specified on the command line. Fix it so that the debug output is only shown when we have an event qualifier list, i.e. when we actually are doing subset syscall tracing. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Milian Wolff <mail@milianw.de> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Fixes: 19867b61 ("perf trace: Use event filters for the event qualifier list") Link: http://lkml.kernel.org/n/tip-7980ym6vujgh3yiai0cqzc88@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 04 Aug, 2015 14 commits
-
-
Arnaldo Carvalho de Melo authored
The libtraceevent handler (session->tevent) is only initialized when there are tracepoints in a perf.data event list, so do not call pevent_set_function_resolve() when there are none, fixing a segfault. Reported-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-xyynkucl5p4bcs13zi4i4b1f@git.kernel.org Report-link: http://lkml.kernel.org/r/20150803174113.GA20282@krava.redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Peter Zijlstra authored
Vince Weaver and Stephane Eranian reported warnings in the PEBS code when running the perf fuzzer. Stephane wrote:

> I can reproduce the problem on my HSW running the fuzzer.
>
> I can see why this could be happening if you are mixing PEBS and non PEBS events
> in the bottom 4 counters. I suspect:
>
>   for (bit = 0; bit < x86_pmu.max_pebs_events; bit++) {
>     if ((counts[bit] == 0) && (error[bit] == 0))
>       continue;
>
> This test is not correct when you have non-PEBS events mixed with
> PEBS events and they overflow at the same time. They will have
> counts[i] != 0 but error[i] == 0, and thus you fall thru the loop
> and hit the assert. Or it is something along those lines.

The only way I can make this work is if ->status only has !PEBS events set, because if it has both set we'll take that slow path which masks out the !PEBS bits. After masking there are 3 options:

 - there is one bit set, and its @bit, we increment counts[bit].
 - there are multiple bits set, we increment error[] for each set bit, we do not increment counts[].
 - there are no bits set, we do nothing.

The intent was to never increment counts[] for !PEBS events. Now if we start out with only a single !PEBS event set, we'll pass the test and increment counts[] for a !PEBS and hit the warn.

Reported-by: Vince Weaver <vincent.weaver@maine.edu> Reported-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Liang, Kan authored
When disabling a PEBS event, we need to drain the buffer. Doing so requires a correct cpuc->pebs_active mask. The current code clears the pebs_active bit before draining the buffer. Fix that. Signed-off-by: "Liang, Kan" <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver<vincent.weaver@maine.edu> Link: http://lkml.kernel.org/r/37D7C6CF3E00A74B8858931C1DB2F07701885A65@SHSMSX103.ccr.corp.intel.com [ Fixed the SOB. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Andy Lutomirski authored
This patch adds an MSR PMU to support free running MSR counters, such as the time- and frequency-related counters TSC, IA32_APERF, IA32_MPERF and IA32_PPERF, as well as SMI_COUNT. The events are exposed in sysfs for use by perf stat and other tools; the files are under /sys/devices/msr/events/. Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Kan Liang <kan.liang@intel.com> [ s/freq/msr/, added SMI_COUNT, fixed bugs. ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@kernel.org Cc: adrian.hunter@intel.com Cc: dsahern@gmail.com Cc: eranian@google.com Cc: jolsa@kernel.org Cc: mark.rutland@arm.com Cc: namhyung@kernel.org Link: http://lkml.kernel.org/r/1437407346-31186-1-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
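A hedged usage sketch: open one of these free-running counters directly with perf_event_open(), reading the dynamic PMU type from sysfs; config 0x0 is assumed to correspond to the tsc event (check /sys/devices/msr/events/tsc on a real system), and error handling is trimmed:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    int main(void)
    {
        struct perf_event_attr attr;
        FILE *f = fopen("/sys/bus/event_source/devices/msr/type", "r");
        int type, fd;
        uint64_t count;

        if (!f || fscanf(f, "%d", &type) != 1)
            return 1;
        fclose(f);

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = type;   /* dynamic PMU id read from sysfs */
        attr.config = 0x0;  /* assumed: the tsc event */

        fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0)
            return 1;
        sleep(1);
        if (read(fd, &count, sizeof(count)) != sizeof(count))
            return 1;
        printf("count over ~1s: %llu\n", (unsigned long long)count);
        return 0;
    }

With the sysfs events in place the same counters are also reachable from the command line, e.g. 'perf stat -e msr/tsc/ -a sleep 1'.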
-
Kan Liang authored
The uncore subsystem for Broadwell-DE is similar to Haswell-EP. There are some differences in pci device IDs, box number and constraints. Please refer to the public document: http://www.intel.com/content/www/us/en/processors/xeon/xeon-d-1500-uncore-performance-monitoring.html Signed-off-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: eranian@google.com Link: http://lkml.kernel.org/r/1435839172-15114-1-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Andi Kleen authored
The next patch adds a new perf extra register where 0x1ff is not a valid value. Use 0x11 instead. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1435707205-6676-3-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Andi Kleen authored
merge_attr() allows merging two sysfs attribute tables. Export it to be usable by other files too. The next patch is going to use that to extend the sysfs format attributes for a CPU. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: eranian@google.com Link: http://lkml.kernel.org/r/1435612935-24425-1-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
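What merging two attribute tables boils down to can be sketched like this (a userspace-flavored illustration, not the kernel's merge_attr() implementation): both inputs are NULL-terminated pointer arrays and the result is a newly allocated NULL-terminated array containing both:

    #include <stdlib.h>

    static void **merge_ptr_tables(void **a, void **b)
    {
        size_t na = 0, nb = 0, i;
        void **merged;

        while (a[na])
            na++;
        while (b[nb])
            nb++;

        merged = calloc(na + nb + 1, sizeof(*merged));
        if (!merged)
            return NULL;

        for (i = 0; i < na; i++)
            merged[i] = a[i];
        for (i = 0; i < nb; i++)
            merged[na + i] = b[i];
        /* calloc already left the terminating NULL in place */
        return merged;
    }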
-
Andi Kleen authored
In callstack mode the LBR is not a ring buffer, but a stack that grows up and down. This means in this case we don't need to access all LBRs, only the ones up to TOS. Do this optimization for the normal LBR read, and the context switch save/restore code. For save/restore it can be done unconditionally, as it only runs when call stack mode is active. This recovers some of the cost of going to 32 LBRs on Skylake. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@kernel.org Cc: eranian@google.com Cc: jolsa@redhat.com Link: http://lkml.kernel.org/r/1432786398-23861-6-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
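Conceptually the optimization looks like this (a model, not the kernel's lbr.c, and it assumes TOS indexes the most recent valid entry):

    #include <stdio.h>

    struct lbr_model {
        unsigned int lbr_nr;          /* hardware LBR depth, e.g. 32 on Skylake */
        unsigned int tos;             /* current top-of-stack index */
        int          callstack_mode;  /* LBR used as a call stack, not a ring */
    };

    /* In callstack mode only the entries up to TOS hold live call-stack data,
     * so reads and save/restore can stop there instead of touching all lbr_nr. */
    static unsigned int lbr_entries_to_access(const struct lbr_model *lbr)
    {
        if (lbr->callstack_mode)
            return lbr->tos + 1;
        return lbr->lbr_nr;
    }

    int main(void)
    {
        struct lbr_model skl = { .lbr_nr = 32, .tos = 5, .callstack_mode = 1 };

        printf("accessing %u of %u entries\n",
               lbr_entries_to_access(&skl), skl.lbr_nr);
        return 0;
    }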
-
Andi Kleen authored
Use the correct index to save/restore the LBR_INFO_x MSR in callstack mode. This is more a cleanup, as even with the wrong index the register was correctly saved/restored, and also LBR callgraph mode in perf tools does not really need anything in LBR_INFO. But still better to use the right index. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@kernel.org Cc: eranian@google.com Cc: jolsa@redhat.com Link: http://lkml.kernel.org/r/1432786398-23861-5-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Andi Kleen authored
Add perf core PMU support for future Intel Skylake CPU cores. The code is based on Haswell/Broadwell. There is a new cache event list, based on the updated Haswell event list. Skylake has removed most counter constraints on basic events, so the basic constraints table now only has a single entry (plus the fixed counters). TSX support and various other setups are all shared with Haswell. Skylake has 32 LBR entries. Add a new LBR init function to set this up. The filters are all the same as Haswell. It also has a new LBR format with a separate LBR_INFO_* MSR, but that has been already added earlier. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: eranian@google.com Link: http://lkml.kernel.org/r/1431285767-27027-7-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Andi Kleen authored
In Arch perfmon v4 the GLOBAL_STATUS reset automatically unfreezes LBRs. So no need to do it manually in the LBR code. Add a check to skip it. v2: Move test up to beginning of function. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: eranian@google.com Link: http://lkml.kernel.org/r/1431285767-27027-9-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Andi Kleen authored
With Arch Perfmon v4 the PMU ack unfreezes the LBRs. So we need to do the PMU ack after the LBR reading, otherwise the LBRs would be polluted by the PMI handler. This is a minimal change. In principle the ACK could be moved much later. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: eranian@google.com Link: http://lkml.kernel.org/r/1431285767-27027-10-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Andi Kleen authored
ArchPerfmon v4 has some new status bits in GLOBAL_STATUS. These need to be ignored when deciding whether an NMI was an NMI, to avoid eating all NMIs when they stay set, see: b292d7a1 ("perf/x86/intel: ignore CondChgd bit to avoid false NMI handling") This patch ignores the new ASIF bit, which indicates that SGX interfered with the PMU, and also the new LBR freezing bits, which are set when the LBRs get frozen, plus the existing CondChange (set by JTAG debuggers and some buggy BIOSes). Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: eranian@google.com Link: http://lkml.kernel.org/r/1431285767-27027-8-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Andi Kleen authored
Add support for the new LBRv5 format used on Intel Skylake CPUs. The flags for mispredict, abort, in_tx etc. moved to a range of separate LBR_INFO_* MSRs. Teach the LBR code to read those. The original LBR registers stay the same, except they have full sign extension now. LBR_INFO also reports a cycle count to the last branch. Report the cycle information using the new "cycles" branch_info output field. In addition we have to context switch and clear the new INFO MSRs to avoid any information leaks. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: eranian@google.com Link: http://lkml.kernel.org/r/1431285767-27027-6-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
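A decoding sketch for a single LBR_INFO_x value, following the layout described above (flags in the top bits, cycle count in the low 16 bits); the exact bit positions here are assumptions for illustration, not a canonical definition:

    #include <stdint.h>

    struct lbr_info_decoded {
        unsigned int mispred;  /* branch was mispredicted */
        unsigned int in_tx;    /* branch executed inside a TSX transaction */
        unsigned int abort;    /* branch caused a transaction abort */
        unsigned int cycles;   /* cycles elapsed since the previous branch */
    };

    static struct lbr_info_decoded decode_lbr_info(uint64_t info)
    {
        struct lbr_info_decoded d;

        d.mispred = (unsigned int)((info >> 63) & 1);
        d.in_tx   = (unsigned int)((info >> 62) & 1);
        d.abort   = (unsigned int)((info >> 61) & 1);
        d.cycles  = (unsigned int)(info & 0xffff);
        return d;
    }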
-