- 11 Mar, 2019 10 commits
-
-
Jiri Olsa authored
Add perf_data__update_dir() to update the size for every file within the perf.data directory. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20190308134745.5057-4-jolsa@kernel.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
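The update helper amounts to a loop over the files kept in the perf.data directory, refreshing each recorded size from its open file descriptor. A minimal sketch, assuming the files are tracked in a 'dir.files' array as in tools/perf/util/data.c (field names here are illustrative, not necessarily the exact ones merged):

    int perf_data__update_dir(struct perf_data *data)
    {
            int i;

            if (WARN_ON(!data->is_dir))
                    return -EINVAL;

            for (i = 0; i < data->dir.nr; i++) {
                    struct perf_data_file *file = &data->dir.files[i];
                    struct stat st;

                    /* refresh the size of every file in the perf.data directory */
                    if (fstat(file->fd, &st))
                            return -1;

                    file->size = st.st_size;
            }

            return 0;
    }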
-
Jiri Olsa authored
We can't store the auxtrace index when we store into multiple files, because we keep only the offset for it, not the file. The auxtrace data will be processed correctly in the 'pipe' mode. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20190308134745.5057-3-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

-
Jiri Olsa authored
The caller needs to set the 'struct perf_data::is_dir' flag, and the path will then be treated as a directory. The 'struct perf_data::file' is initialized and opened as the 'path/header' file. Add a check to the directory interface functions to verify the is_dir flag. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/r/20190308134745.5057-2-jolsa@kernel.org [ Be consistent on how to signal failure, i.e. use -1 and let users check errno ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
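In practice the directory open path boils down to building the 'path/header' name and guarding the directory-only helpers. A rough sketch of the idea, assuming the asprintf/WARN_ON style used elsewhere in tools/perf/util/data.c (open_file() and the helper names are illustrative):

    /* open 'path' as a directory: the main header lives in <path>/header */
    static int open_dir(struct perf_data *data)
    {
            if (asprintf(&data->file.path, "%s/header", data->path) < 0)
                    return -1;

            return open_file(data);
    }

    /* directory-only interfaces bail out when the caller did not set is_dir */
    int perf_data__create_dir(struct perf_data *data, int nr)
    {
            if (WARN_ON(!data->is_dir))
                    return -EINVAL;

            /* allocate and open the nr per-CPU data files here */
            return 0;
    }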
-
Martin Liška authored
This patch adds PMC events for AMD Family 17h CPUs as defined in [1]. It covers the events described in section 2.1.13. The regex pattern in mapfile.csv covers all CPUs of the family. [1] https://support.amd.com/TechDocs/54945_PPR_Family_17h_Models_00h-0Fh.pdf Signed-off-by: Martin Liška <mliska@suse.cz> Acked-by: Borislav Petkov <bp@suse.de> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Jon Grimm <jon.grimm@amd.com> Cc: Martin Jambor <mjambor@suse.cz> Cc: William Cohen <wcohen@redhat.com> Link: https://lkml.kernel.org/r/d65873ca-e402-b198-4fe9-8c4af81258c8@suse.cz Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Adrian Hunter authored
Since commit 4d99e413 ("perf machine: Workaround missing maps for x86 PTI entry trampolines"), perf tools has been creating more than one kernel map, however 'perf probe' assumed there could be only one. Fix by using machine__kernel_map() to get the main kernel map. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Tested-by: Joseph Qi <joseph.qi@linux.alibaba.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jiufei Xue <jiufei.xue@linux.alibaba.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Cc: Xu Yu <xuyu@linux.alibaba.com> Fixes: 4d99e413 ("perf machine: Workaround missing maps for x86 PTI entry trampolines") Fixes: d83212d5 ("kallsyms, x86: Export addresses of PTI entry trampolines") Link: http://lkml.kernel.org/r/2ed432de-e904-85d2-5c36-5897ddc5b23b@intel.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Andi Kleen authored
Many workloads change over time. 'perf report' currently aggregates over the whole time range covered by perf.data. This patch adds an option for a time quantum used to quantize the perf.data over time. This just adds the option; it will be used in follow-on patches for a time sort key. Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20190305144758.12397-6-andi@firstfloor.org [ Use NSEC_PER_[MU]SEC ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
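Once the follow-on time sort key lands, usage would look roughly like the command below (assuming the option ends up named --time-quantum and accepts the usual time suffixes; both details are assumptions here, not taken from this commit):

    # perf report --time-quantum 100ms --sort time,overhead,symbol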
-
Andi Kleen authored
Add a utility function to print nanosecond timestamps. Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20190305144758.12397-11-andi@firstfloor.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
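The helper mirrors the existing microsecond formatter in tools/perf/util/time-utils.c. A sketch of what it could look like (the function name and exact field widths are assumed for illustration):

    int timestamp__scnprintf_nsec(u64 timestamp, char *buf, size_t sz)
    {
            u64 sec  = timestamp / NSEC_PER_SEC;
            u64 nsec = timestamp % NSEC_PER_SEC;

            /* e.g. "  123.000456789" - seconds, then 9 digits of nanoseconds */
            return scnprintf(buf, sz, "%5" PRIu64 ".%09" PRIu64, sec, nsec);
    }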
-
Andi Kleen authored
Upcoming changes add timestamp output in perf report. Add a --ns argument similar to perf script to support nanoseconds resolution when needed. Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20190305144758.12397-5-andi@firstfloor.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
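Usage mirrors 'perf script': pass --ns to switch the timestamp column from microsecond to nanosecond resolution once the time sort key is in place, for example (illustrative command line, not from this commit):

    # perf report --ns --sort time,overhead,symbol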
-
Andi Kleen authored
perf script -F +insn was only working for PT traces because the PT instruction decoder was filling in the insn/insn_len sample attributes. Support it for non PT samples too on x86 using the existing x86 instruction decoder. This adds some extra checking to ensure that we don't try to decode instructions when using perf.data from a different architecture. % perf record -a sleep 1 % perf script -F ip,sym,insn --xed ffffffff811704c9 remote_function movl %eax, 0x18(%rbx) ffffffff8100bb50 intel_bts_enable_local retq ffffffff81048612 native_apic_mem_write movl %esi, -0xa04000(%rdi) ffffffff81048612 native_apic_mem_write movl %esi, -0xa04000(%rdi) ffffffff81048612 native_apic_mem_write movl %esi, -0xa04000(%rdi) ffffffff810f1f79 generic_exec_single xor %eax, %eax ffffffff811704c9 remote_function movl %eax, 0x18(%rbx) ffffffff8100bb34 intel_bts_enable_local movl 0x2000(%rax), %edx ffffffff81048610 native_apic_mem_write mov %edi, %edi ... Committer testing: Before: # perf script -F ip,sym,insn --xed | head -5 ffffffffa4068804 native_write_msr addb %al, (%rax) ffffffffa4068804 native_write_msr addb %al, (%rax) ffffffffa4068804 native_write_msr addb %al, (%rax) ffffffffa4068806 native_write_msr addb %al, (%rax) ffffffffa4068806 native_write_msr addb %al, (%rax) # perf script -F ip,sym,insn --xed | grep -v "addb %al, (%rax)" # After: # perf script -F ip,sym,insn --xed | head -5 ffffffffa4068804 native_write_msr wrmsr ffffffffa4068804 native_write_msr wrmsr ffffffffa4068804 native_write_msr wrmsr ffffffffa4068806 native_write_msr nopl %eax, (%rax,%rax,1) ffffffffa4068806 native_write_msr nopl %eax, (%rax,%rax,1) # perf script -F ip,sym,insn --xed | grep -v "addb %al, (%rax)" | head -5 ffffffffa4068804 native_write_msr wrmsr ffffffffa4068804 native_write_msr wrmsr ffffffffa4068804 native_write_msr wrmsr ffffffffa4068806 native_write_msr nopl %eax, (%rax,%rax,1) ffffffffa4068806 native_write_msr nopl %eax, (%rax,%rax,1) # More examples: # perf script -F ip,sym,insn --xed | grep -v native_write_msr | head ffffffffa416b90e tick_check_broadcast_expired btq %rax, 0x1a5f42a(%rip) ffffffffa4956bd0 nmi_cpu_backtrace pushq %r13 ffffffffa415b95e __hrtimer_next_event_base movq 0x18(%rax), %rdx ffffffffa4956bf3 nmi_cpu_backtrace popq %r12 ffffffffa4171d5c smp_call_function_single pause ffffffffa4956bdd nmi_cpu_backtrace mov %ebp, %r12d ffffffffa4797e4d menu_select cmp $0x190, %rax ffffffffa4171d5c smp_call_function_single pause ffffffffa405a7d8 nmi_cpu_backtrace_handler callq 0xffffffffa4956bd0 ffffffffa4797f7a menu_select shr $0x3, %rax # Which matches the annotate output modulo resolving callqs: # perf annotate --stdio2 nmi_cpu_backtrace_handler Samples: 4 of event 'cycles:ppp', 4000 Hz, Event count (approx.): 35908, [percent: local period] nmi_cpu_backtrace_handler() /lib/modules/5.0.0+/build/vmlinux Percent Disassembly of section .text: ffffffff8105a7d0 <nmi_cpu_backtrace_handler>: nmi_cpu_backtrace_handler(): nmi_trigger_cpumask_backtrace(mask, exclude_self, nmi_raise_cpu_backtrace); } static int nmi_cpu_backtrace_handler(unsigned int cmd, struct pt_regs *regs) { 24.45 → callq __fentry__ if (nmi_cpu_backtrace(regs)) mov %rsi,%rdi 75.55 → callq nmi_cpu_backtrace return NMI_HANDLED; movzbl %al,%eax return NMI_DONE; } ← retq # # perf annotate --stdio2 __hrtimer_next_event_base Samples: 4 of event 'cycles:ppp', 4000 Hz, Event count (approx.): 767977, [percent: local period] __hrtimer_next_event_base() /lib/modules/5.0.0+/build/vmlinux Percent Disassembly of section .text: ffffffff8115b910 
<__hrtimer_next_event_base>: __hrtimer_next_event_base(): static ktime_t __hrtimer_next_event_base(struct hrtimer_cpu_base *cpu_base, const struct hrtimer *exclude, unsigned int active, ktime_t expires_next) { → callq __fentry__ <SNIP> 4a: add $0x1,%r14 77.31 mov 0x18(%rax),%rdx shl $0x6,%r14 sub 0x38(%rbx,%r14,1),%rdx if (expires < expires_next) { cmp %r12,%rdx ↓ jge 68 <SNIP> Signed-off-by: Andi Kleen <ak@linux.intel.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20190305144758.12397-3-andi@firstfloor.org [ Converted fetch_exe() to use the name it ended up having when merged: thread__memcpy() ] [ archinsn.c needs the instruction decoder that is only build when CONFIG_AUXTRACE=y, fix that ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Stephane Eranian authored
On mmap(), perf_events generates a RECORD_MMAP record and then checks which events are interested in this record. There are currently 2 versions of mmap records: RECORD_MMAP and RECORD_MMAP2. MMAP2 is larger. The event configuration controls which version the user level tool accepts: if the event->attr.mmap2 field is set to 1, then an MMAP2 record is returned. perf_event_mmap_output() takes care of this. It checks attr->mmap2 and corrects the record fields before putting it in the sampling buffer of the event. At the end the function restores the modified MMAP record fields. The problem is that the function restores the size but not the type. Thus, if a subsequent event only accepts the MMAP type, it would instead receive an MMAP2 record with the size of an MMAP record. This patch fixes the problem by restoring the record type on exit. Signed-off-by: Stephane Eranian <eranian@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Fixes: 13d7a241 ("perf: Add attr->mmap2 attribute to an event") Link: http://lkml.kernel.org/r/20190307185233.225521-1-eranian@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
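The shape of the fix is easiest to see as a save/restore pair around the per-event output. The sketch below is illustrative rather than the literal kernel diff; mmap2_extra_size() is a hypothetical stand-in for the extra MMAP2 fields:

    static void perf_event_mmap_output(struct perf_event *event, void *data)
    {
            struct perf_mmap_event *mmap_event = data;
            /* remember the original record layout before adapting it */
            int type = mmap_event->event_id.header.type;
            int size = mmap_event->event_id.header.size;

            if (event->attr.mmap2) {
                    /* widen the record to the MMAP2 layout for this event */
                    mmap_event->event_id.header.type = PERF_RECORD_MMAP2;
                    mmap_event->event_id.header.size += mmap2_extra_size(mmap_event);
            }

            /* ... write the record into this event's ring buffer ... */

            /* restore both fields, not only the size, for the next event */
            mmap_event->event_id.header.size = size;
            mmap_event->event_id.header.type = type;
    }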
-
- 09 Mar, 2019 4 commits
-
-
Ingo Molnar authored
Merge tag 'perf-core-for-mingo-5.1-20190307' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent Pull perf/core changes from Arnaldo Carvalho de Melo: perf bpf: Arnaldo Carvalho de Melo: - Automatically add BTF ELF markers to 'perf trace' BPF programs, so that tools such as 'bpftool map dump' can pretty print map keys and values. perf c2c: Jiri Olsa: - Fix report for empty NUMA node. perf diff: Jin Yao: - Support --time, --cpu, --pid and --tid filter options. perf probe: Arnaldo Carvalho de Melo: - Clarify error message about not finding kernel modules debuginfo. perf record: Jiri Olsa: - Fixup probing for max attr.precise_ip. perf trace: Arnaldo Carvalho de Melo: - Add missing %s lost in the 'msg_flags' recvmmsg arg when adding prefix suppression logic. perf annotate: Arnaldo Carvalho de Melo: - Calculate the max instruction name, align column to that, removing the hardcoded max 6 chars and cope with instructions with names longer than that, such as vpmovmskb, vpcmpeqb, etc. kernel: Song Liu: - Consider events with attr.bpf_event set as side-band. Gustavo A. R. Silva: - Mark expected switch fall-through in perf_event_parse_addr_filter(). Libraries: Jiri Olsa: - Fix leaks and double frees on error paths. libtraceevent: Tony Jones: - Fix buffer overflow in arg_eval(). python scripting: Tony Jones: - More python3 fixes. Trivial: Yang Wei: - Remove needless extra semicolon in clang C++ glue code. Intel PT/BTS: Adrian Hunter: - Improve auxtrace address filter error message when there is no DSO. - Fix divide by zero when TSC is not available. - Further improvements to the export to sqlite/postgresql python scripts and to the GUI sqlviewer, exporting 'parent_id' so that we can enable the creation of call trees. Andi Kleen: - Generalize function to copy from thread addr space from intel-bts code. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Gustavo A. R. Silva authored
In preparation to enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through. This patch fixes the following warning: kernel/events/core.c: In function ‘perf_event_parse_addr_filter’: kernel/events/core.c:9154:11: warning: this statement may fall through [-Wimplicit-fallthrough=] kernel = 1; ~~~~~~~^~~ kernel/events/core.c:9156:3: note: here case IF_SRC_FILEADDR: ^~~~ Warning level 3 was used: -Wimplicit-fallthrough=3 This patch is part of the ongoing efforts to enable -Wimplicit-fallthrough. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: https://lkml.kernel.org/r/20190212205430.GA8446@embeddedorSigned-off-by: Ingo Molnar <mingo@kernel.org>
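The marking itself is just a comment that tells both the reader and -Wimplicit-fallthrough that the missing break is intentional, along the lines of this simplified excerpt from the switch in perf_event_parse_addr_filter():

    case IF_SRC_KERNELADDR:
    case IF_SRC_KERNEL:
            kernel = 1;
            /* fall through */

    case IF_SRC_FILEADDR:
    case IF_SRC_FILE:
            if (state != IF_STATE_SOURCE)
                    goto fail;
            break;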
-
Kan Liang authored
The client IMC bandwidth events currently return very large values: $ perf stat -e uncore_imc/data_reads/ -e uncore_imc/data_writes/ -I 10000 -a 10.000117222 34,788.76 MiB uncore_imc/data_reads/ 10.000117222 8.26 MiB uncore_imc/data_writes/ 20.000374584 34,842.89 MiB uncore_imc/data_reads/ 20.000374584 10.45 MiB uncore_imc/data_writes/ 30.000633299 37,965.29 MiB uncore_imc/data_reads/ 30.000633299 323.62 MiB uncore_imc/data_writes/ 40.000891548 41,012.88 MiB uncore_imc/data_reads/ 40.000891548 6.98 MiB uncore_imc/data_writes/ 50.001142480 1,125,899,906,621,494.75 MiB uncore_imc/data_reads/ 50.001142480 6.97 MiB uncore_imc/data_writes/ The client IMC events are freerunning counters. They still use the old event encoding format (0x1 for data_read and 0x2 for data write). The counter bit width is calculated by common code, which assume that the standard encoding format is used for the freerunning counters. Error bit width information is calculated. The patch intends to convert the old client IMC event encoding to the standard encoding format. Current common code uses event->attr.config which directly copy from user space. We should not implicitly modify it for a converted event. The event->hw.config is used to replace the event->attr.config in common code. For client IMC events, the event->attr.config is used to calculate a converted event with standard encoding format in the custom event_init(). The converted event is stored in event->hw.config. For other events of freerunning counters, they already use the standard encoding format. The same value as event->attr.config is assigned to event->hw.config in common event_init(). Reported-by: Jin Yao <yao.jin@linux.intel.com> Tested-by: Jin Yao <yao.jin@linux.intel.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: stable@kernel.org # v4.18+ Fixes: 9aae1780 ("perf/x86/intel/uncore: Clean up client IMC uncore") Link: https://lkml.kernel.org/r/20190227165729.1861-1-kan.liang@linux.intel.comSigned-off-by: Ingo Molnar <mingo@kernel.org>
-
Alexander Shishkin authored
Currently, the AUX buffer allocator will use high-order allocations for PMUs that don't support hardware scatter-gather chaining to ensure large contiguous blocks of pages, and always use an array of single pages otherwise. There is, however, a tangible performance benefit in using larger chunks of contiguous memory even in the latter case, which comes from not having to fetch the next page's address at every page boundary. In particular, a task running under Intel PT on an Atom CPU shows 1.5%-2% less runtime penalty with a single multi-page output region in snapshot mode (no PMI) than with multiple single-page output regions, from ~6% down to ~4%. For snapshot mode it does make a difference as it is intended to run over long periods of time. For this reason, change the allocation policy to always optimistically start with the highest possible order when allocating pages for the AUX buffer, descending until the allocation succeeds or order zero allocation fails. Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: https://lkml.kernel.org/r/20190215114727.62648-2-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
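The resulting allocation pattern is the classic descending-order retry loop; a simplified sketch of the idea (not the literal kernel/events/ring_buffer.c code, which also splits the pages it gets):

    /* start at the largest order that covers what is left and walk down */
    static struct page *rb_alloc_aux_page(int node, int order)
    {
            struct page *page;

            if (order > MAX_ORDER)
                    order = MAX_ORDER;

            do {
                    page = alloc_pages_node(node, PERF_AUX_GFP, order);
            } while (!page && order--);

            return page;
    }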
-
- 06 Mar, 2019 24 commits
-
-
Jiri Olsa authored
Make sure data->file.path is zeroed on the perf_data__open error path and in perf_data__close, so we don't double free it in case someone calls it twice. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de> Cc: Nageswara R Sastry <nasastry@in.ibm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Link: http://lkml.kernel.org/r/20190305152536.21035-9-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
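The idiomatic way to do that in perf tools is zfree(), which frees the pointer and NULLs it in one step, so a second close becomes harmless; roughly (a sketch, the real perf_data__close also handles the directory case):

    void perf_data__close(struct perf_data *data)
    {
            close(data->file.fd);
            zfree(&data->file.path);   /* free and clear, safe to call again */
    }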
-
Jiri Olsa authored
We can't call perf_data__close and subsequently perf_session__delete, because the latter will call perf_data__close again and cause a double free of data->file.path: $ perf report -i . incompatible file format (rerun with -v to learn more) free(): double free detected in tcache 2 Aborted (core dumped) In fact we don't need to call perf_data__close at all, because by the time the goto out_close is reached, session->data is already initialized, so the perf_data__close call will be triggered from perf_session__delete. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de> Cc: Nageswara R Sastry <nasastry@in.ibm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Fixes: 2d4f2799 ("perf data: Add global path holder") Link: http://lkml.kernel.org/r/20190305152536.21035-8-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jiri Olsa authored
Currently we probe for precise_ip with user specified perf_event_attr, which might fail because of unsupported kernel features, which would get disabled during the open time anyway. Switching the probe to take place on simple hw cycles, so the following record sets proper precise_ip: # perf record -e cycles:P ls # perf evlist -v cycles:P: size: 112, ... precise_ip: 3, ... Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de> Cc: Nageswara R Sastry <nasastry@in.ibm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Link: http://lkml.kernel.org/r/20190305152536.21035-7-jolsa@kernel.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jiri Olsa authored
Read the caps/max_precise value and store it in struct perf_pmu, to be used when setting the maximum precise_ip field in the following patch. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de> Cc: Nageswara R Sastry <nasastry@in.ibm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Link: http://lkml.kernel.org/r/20190305152536.21035-5-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
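Reading the capability is a matter of parsing one sysfs file per PMU. A sketch of the lookup, assuming the usual sysfs helpers available in tools/lib/api/fs (the helper and function names here are illustrative):

    static int pmu_max_precise(const char *name)
    {
            char path[PATH_MAX];
            int max_precise = -1;

            scnprintf(path, PATH_MAX,
                      "%s/bus/event_source/devices/%s/caps/max_precise",
                      sysfs__mountpoint(), name);

            /* a missing file simply leaves -1, i.e. "unknown" */
            sysfs__read_int(path, &max_precise);
            return max_precise;
    }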
-
Jiri Olsa authored
We can't allocate he->srcline unconditionally, only when a new hist_entry is created. Move the he->srcline allocation into the hist_entry__init function. Original-patch-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de> Suggested-by: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Nageswara R Sastry <nasastry@in.ibm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Link: http://lkml.kernel.org/r/20190305152536.21035-4-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jiri Olsa authored
Add an error path to hist_entry__init to unify error handling, so that each new member does not need to free everything else on failure. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: nageswara r sastry <nasastry@in.ibm.com> Link: http://lkml.kernel.org/r/20190305152536.21035-3-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
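With a unified error path, each allocation only has to undo the members initialized before it, using the usual goto-unwind pattern. A condensed sketch (member names and the exact signature are illustrative, not the merged code):

    static int hist_entry__init(struct hist_entry *he,
                                struct hist_entry *template, bool sample_self)
    {
            *he = *template;

            he->stat_acc = malloc(sizeof(he->stat));
            if (he->stat_acc == NULL)
                    return -ENOMEM;

            if (he->branch_info) {
                    he->branch_info = memdup(he->branch_info, sizeof(*he->branch_info));
                    if (he->branch_info == NULL)
                            goto err;
            }

            if (he->raw_data) {
                    he->raw_data = memdup(he->raw_data, he->raw_size);
                    if (he->raw_data == NULL)
                            goto err_infos;
            }
            return 0;

    err_infos:
            zfree(&he->branch_info);
    err:
            zfree(&he->stat_acc);
            return -ENOMEM;
    }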
-
Jiri Olsa authored
Ravi Bangoria reported that we fail with an empty NUMA node with the following message: $ lscpu NUMA node0 CPU(s): NUMA node1 CPU(s): 0-4 $ sudo ./perf c2c report node/cpu topology bug Failed setup nodes Fix this by detecting the empty node and keeping its CPU set empty. Reported-by: Nageswara R Sastry <nasastry@in.ibm.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20190305152536.21035-2-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Tony Jones authored
Support both Python2 and Python3 in the intel-pt-events.py script. There may be differences in the ordering of output lines due to differences in dictionary ordering etc. However the format within lines should be unchanged. The use of 'from __future__' implies the minimum supported Python2 version is now v2.6. Signed-off-by: Tony Jones <tonyj@suse.de> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Link: http://lkml.kernel.org/r/fd26acf9-0c0f-717f-9664-a3c33043ce19@suse.de Signed-off-by: Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Tony Jones authored
Support both Python2 and Python3 in the event_analyzing_sample.py script. There may be differences in the ordering of output lines due to differences in dictionary ordering etc. However the format within lines should be unchanged. The use of 'from __future__' implies the minimum supported Python2 version is now v2.6. Signed-off-by: Tony Jones <tonyj@suse.de> Cc: Feng Tang <feng.tang@intel.com> Link: http://lkml.kernel.org/r/20190302011903.2416-5-tonyj@suse.de Signed-off-by: Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Tony Jones authored
Support both Python 2 and Python 3 in the check-perf-trace.py script. There may be differences in the ordering of output lines due to differences in dictionary ordering etc. However the format within lines should be unchanged. The use of from __future__ implies the minimum supported version of Python2 is now v2.6 Signed-off-by: Tony Jones <tonyj@suse.de> Cc: Tom Zanussi <tzanussi@gmail.com> Link: http://lkml.kernel.org/r/20190302011903.2416-4-tonyj@suse.deSigned-off-by: Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Tony Jones authored
Support both Python2 and Python3 in the futex-contention.py script. There may be differences in the ordering of output lines due to differences in dictionary ordering etc. However the format within lines should be unchanged. The use of 'from __future__' implies the minimum supported Python2 version is now v2.6. Signed-off-by: Tony Jones <tonyj@suse.de> Link: http://lkml.kernel.org/r/20190302011903.2416-3-tonyj@suse.de Signed-off-by: Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Tony Jones authored
Remove mixed indentation in Python scripts. Revert to either all tabs (most common form) or all spaces (4 or 8) depending on what was the intent of the original commit. This is necessary to complete Python3 support as it will flag an error if it encounters mixed indentation. Signed-off-by: Tony Jones <tonyj@suse.de> Link: http://lkml.kernel.org/r/20190302011903.2416-2-tonyj@suse.deSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jin Yao authored
Using the existing symbol_conf.pid_list_str and symbol_conf.tid_list_str logic. For example: perf diff --tid 13965 It'll only diff the samples for thread 13965. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1551791143-10334-4-git-send-email-yao.jin@linux.intel.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jin Yao authored
To improve 'perf diff', implement a --cpu filter option. Multiple CPUs can be provided as a comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2. The default is to report samples on all CPUs. For example: perf diff --cpu 0,1 This only diffs the samples for CPU 0 and CPU 1. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1551791143-10334-3-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Jin Yao authored
To improve 'perf diff', implement a --time filter option to diff the samples within given time window. It supports time percent with multiple time ranges. The time string format is 'a%/n,b%/m,...' or 'a%-b%,c%-%d,...'. For example: Select the second 10% time slice to diff: perf diff --time 10%/2 Select from 0% to 10% time slice to diff: perf diff --time 0%-10% Select the first and the second 10% time slices to diff: perf diff --time 10%/1,10%/2 Select from 0% to 10% and 30% to 40% slices to diff: perf diff --time 0%-10%,30%-40% It also supports analysing samples within a given time window <start>,<stop>. Times have the format seconds.microseconds. If 'start' is not given (i.e., time string is ',x.y') then analysis starts at the beginning of the file. If the stop time is not given (i.e, time string is 'x.y,') then analysis goes to end of file. Time string is 'a1.b1,c1.d1:a2.b2,c2.d2'. Use ':' to separate timestamps for different perf.data files. For example, we get the timestamp information from perf script. perf script -i perf.data.old mgen 13940 [000] 3946.361400: ... perf script -i perf.data mgen 13940 [000] 3971.150589 ... perf diff --time 3946.361400,:3971.150589, It analyzes the perf.data.old from the timestamp 3946.361400 to the end of perf.data.old and analyzes the perf.data from the timestamp 3971.150589 to the end of perf.data. v4: --- Update abstime_str_dup(), let it return error if strdup is failed, and update __cmd_diff() accordingly. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1551791143-10334-2-git-send-email-yao.jin@linux.intel.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Andi Kleen authored
Add a utility function to fetch executable code. Convert one user over to it. There are more places doing that, but they do significantly different actions, so they are not easy to fit into a single library function. Committer changes: . No need to cast around, make 'buf' be a void pointer. . Rename it to thread__memcpy() to reflect the fact it is about copying a chunk of memory from a thread, i.e. from its address space. . No need to have it in a separate object file, move it to thread.[ch] . Check the return of map__load(), the original code didn't do it, but since we're moving this around, check that as well. Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/r/20190305144758.12397-2-andi@firstfloor.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
We were hardcoding '6' as the max instruction name, and we have lots that are longer than that, see the diff from two 'P' printed TUI annotations for a libc function that uses instructions with long names, such as 'vpmovmskb' with its 9 chars: --- __strcmp_avx2.annotation.before 2019-03-06 16:31:39.368020425 -0300 +++ __strcmp_avx2.annotation 2019-03-06 16:32:12.079450508 -0300 @@ -2,284 +2,284 @@ Event: cycles:ppp Percent endbr64 - 0.10 mov %edi,%eax + 0.10 mov %edi,%eax - xor %edx,%edx + xor %edx,%edx - 3.54 vpxor %ymm7,%ymm7,%ymm7 + 3.54 vpxor %ymm7,%ymm7,%ymm7 - or %esi,%eax + or %esi,%eax - and $0xfff,%eax + and $0xfff,%eax - cmp $0xf80,%eax + cmp $0xf80,%eax - ↓ jg 370 + ↓ jg 370 - 27.07 vmovdqu (%rdi),%ymm1 + 27.07 vmovdqu (%rdi),%ymm1 - 7.97 vpcmpeqb (%rsi),%ymm1,%ymm0 + 7.97 vpcmpeqb (%rsi),%ymm1,%ymm0 - 2.15 vpminub %ymm1,%ymm0,%ymm0 + 2.15 vpminub %ymm1,%ymm0,%ymm0 - 4.09 vpcmpeqb %ymm7,%ymm0,%ymm0 + 4.09 vpcmpeqb %ymm7,%ymm0,%ymm0 - 0.43 vpmovmskb %ymm0,%ecx + 0.43 vpmovmskb %ymm0,%ecx - 1.53 test %ecx,%ecx + 1.53 test %ecx,%ecx - ↓ je b0 + ↓ je b0 - 5.26 tzcnt %ecx,%edx + 5.26 tzcnt %ecx,%edx - 18.40 movzbl (%rdi,%rdx,1),%eax + 18.40 movzbl (%rdi,%rdx,1),%eax - 7.09 movzbl (%rsi,%rdx,1),%edx + 7.09 movzbl (%rsi,%rdx,1),%edx - 3.34 sub %edx,%eax + 3.34 sub %edx,%eax 2.37 vzeroupper ← retq nop - 50: tzcnt %ecx,%edx + 50: tzcnt %ecx,%edx - movzbl 0x20(%rdi,%rdx,1),%eax + movzbl 0x20(%rdi,%rdx,1),%eax - movzbl 0x20(%rsi,%rdx,1),%edx + movzbl 0x20(%rsi,%rdx,1),%edx - sub %edx,%eax + sub %edx,%eax vzeroupper ← retq - data16 nopw %cs:0x0(%rax,%rax,1) + data16 nopw %cs:0x0(%rax,%rax,1) Reported-by: Travis Downs <travis.downs@gmail.com> LPU-Reference: CAOBGo4z1KfmWeOm6Et0cnX5Z6DWsG2PQbAvRn1MhVPJmXHrc5g@mail.gmail.com Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-89wsdd9h9g6bvq52sgp6d0u4@git.kernel.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Linus Torvalds authored
Pull x86 alternative instruction updates from Ingo Molnar: "Small RDTSCP optimization, enabled by the newly added ALTERNATIVE_3(), and other small improvements" * 'x86-alternatives-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/TSC: Use RDTSCP x86/alternatives: Add an ALTERNATIVE_3() macro x86/alternatives: Print containing function x86/alternatives: Add macro comments
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Linus Torvalds authored
Pull scheduler updates from Ingo Molnar: "The main changes in this cycle were: - refcount conversions - Solve the rq->leaf_cfs_rq_list can of worms for real. - improve power-aware scheduling - add sysctl knob for Energy Aware Scheduling - documentation updates - misc other changes" * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (34 commits) kthread: Do not use TIMER_IRQSAFE kthread: Convert worker lock to raw spinlock sched/fair: Use non-atomic cpumask_{set,clear}_cpu() sched/fair: Remove unused 'sd' parameter from select_idle_smt() sched/wait: Use freezable_schedule() when possible sched/fair: Prune, fix and simplify the nohz_balancer_kick() comment block sched/fair: Explain LLC nohz kick condition sched/fair: Simplify nohz_balancer_kick() sched/topology: Fix percpu data types in struct sd_data & struct s_data sched/fair: Simplify post_init_entity_util_avg() by calling it with a task_struct pointer argument sched/fair: Fix O(nr_cgroups) in the load balancing path sched/fair: Optimize update_blocked_averages() sched/fair: Fix insertion in rq->leaf_cfs_rq_list sched/fair: Add tmp_alone_branch assertion sched/core: Use READ_ONCE()/WRITE_ONCE() in move_queued_task()/task_rq_lock() sched/debug: Initialize sd_sysctl_cpus if !CONFIG_CPUMASK_OFFSTACK sched/pelt: Skip updating util_est when utilization is higher than CPU's capacity sched/fair: Update scale invariance of PELT sched/fair: Move the rq_of() helper function sched/core: Convert task_struct.stack_refcount to refcount_t ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Linus Torvalds authored
Pull perf updates from Ingo Molnar: "Lots of tooling updates - too many to list, here's a few highlights: - Various subcommand updates to 'perf trace', 'perf report', 'perf record', 'perf annotate', 'perf script', 'perf test', etc. - CPU and NUMA topology and affinity handling improvements, - HW tracing and HW support updates: - Intel PT updates - ARM CoreSight updates - vendor HW event updates - BPF updates - Tons of infrastructure updates, both on the build system and the library support side - Documentation updates. - ... and lots of other changes, see the changelog for details. Kernel side updates: - Tighten up kprobes blacklist handling, reduce the number of places where developers can install a kprobe and hang/crash the system. - Fix/enhance vma address filter handling. - Various PMU driver updates, small fixes and additions. - refcount_t conversions - BPF updates - error code propagation enhancements - misc other changes" * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (238 commits) perf script python: Add Python3 support to syscall-counts-by-pid.py perf script python: Add Python3 support to syscall-counts.py perf script python: Add Python3 support to stat-cpi.py perf script python: Add Python3 support to stackcollapse.py perf script python: Add Python3 support to sctop.py perf script python: Add Python3 support to powerpc-hcalls.py perf script python: Add Python3 support to net_dropmonitor.py perf script python: Add Python3 support to mem-phys-addr.py perf script python: Add Python3 support to failed-syscalls-by-pid.py perf script python: Add Python3 support to netdev-times.py perf tools: Add perf_exe() helper to find perf binary perf script: Handle missing fields with -F +.. perf data: Add perf_data__open_dir_data function perf data: Add perf_data__(create_dir|close_dir) functions perf data: Fail check_backup in case of error perf data: Make check_backup work over directories perf tools: Add rm_rf_perf_data function perf tools: Add pattern name checking to rm_rf perf tools: Add depth checking to rm_rf perf data: Add global path holder ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Linus Torvalds authored
Pull locking updates from Ingo Molnar: "The biggest part of this tree is the new auto-generated atomics API wrappers by Mark Rutland. The primary motivation was to allow instrumentation without uglifying the primary source code. The linecount increase comes from adding the auto-generated files to the Git space as well: include/asm-generic/atomic-instrumented.h | 1689 ++++++++++++++++-- include/asm-generic/atomic-long.h | 1174 ++++++++++--- include/linux/atomic-fallback.h | 2295 +++++++++++++++++++++++++ include/linux/atomic.h | 1241 +------------ I preferred this approach, so that the full call stack of the (already complex) locking APIs is still fully visible in 'git grep'. But if this is excessive we could certainly hide them. There's a separate build-time mechanism to determine whether the headers are out of date (they should never be stale if we do our job right). Anyway, nothing from this should be visible to regular kernel developers. Other changes: - Add support for dynamic keys, which removes a source of false positives in the workqueue code, among other things (Bart Van Assche) - Updates to tools/memory-model (Andrea Parri, Paul E. McKenney) - qspinlock, wake_q and lockdep micro-optimizations (Waiman Long) - misc other updates and enhancements" * 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (48 commits) locking/lockdep: Shrink struct lock_class_key locking/lockdep: Add module_param to enable consistency checks lockdep/lib/tests: Test dynamic key registration lockdep/lib/tests: Fix run_tests.sh kernel/workqueue: Use dynamic lockdep keys for workqueues locking/lockdep: Add support for dynamic keys locking/lockdep: Verify whether lock objects are small enough to be used as class keys locking/lockdep: Check data structure consistency locking/lockdep: Reuse lock chains that have been freed locking/lockdep: Fix a comment in add_chain_cache() locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count() locking/lockdep: Reuse list entries that are no longer in use locking/lockdep: Free lock classes that are no longer in use locking/lockdep: Update two outdated comments locking/lockdep: Make it easy to detect whether or not inside a selftest locking/lockdep: Split lockdep_free_key_range() and lockdep_reset_lock() locking/lockdep: Initialize the locks_before and locks_after lists earlier locking/lockdep: Make zap_class() remove all matching lock order entries locking/lockdep: Reorder struct lock_class members locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to the cache ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Linus Torvalds authored
Pull EFI updates from Ingo Molnar: "The main EFI changes in this cycle were: - Use 32-bit alignment for efi_guid_t - Allow the SetVirtualAddressMap() call to be omitted - Implement earlycon=efifb based on existing earlyprintk code - Various minor fixes and code cleanups from Sai, Ard and me" * 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: efi: Fix build error due to enum collision between efi.h and ima.h efi/x86: Convert x86 EFI earlyprintk into generic earlycon implementation x86: Make ARCH_USE_MEMREMAP_PROT a generic Kconfig symbol efi/arm/arm64: Allow SetVirtualAddressMap() to be omitted efi: Replace GPL license boilerplate with SPDX headers efi/fdt: Apply more cleanups efi: Use 32-bit alignment for efi_guid_t efi/memattr: Don't bail on zero VA if it equals the region's PA x86/efi: Mark can_free_region() as an __init function
-
Yang Wei authored
Delete a superfluous semicolon in getBPFObjectFromModule(). Signed-off-by: Yang Wei <yang.wei9@zte.com.cn> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Yang Wei <albin_yang@163.com> Link: http://lkml.kernel.org/r/1551710174-3349-1-git-send-email-albin_yang@163.comSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
The libbpf loader expects that some __btf_map_<MAP_NAME> structs be in place with the keys and values types of maps so that one can store the struct definitions and have them sent to the kernel via sys_bpf(fd, cmd = BTF_LOAD) and then later be retrievable via sys_bpf(fd, cmd = BPF_OBJ_GET_INFO_BY_FD) for use by tools such as 'bpftool map dump id MAP_ID'. Since we already have this for defining maps in 'perf trace' BPF events: bpf_map(name, _type, type_key, type_val, _max_entries) As used in the tools/perf/examples/bpf/augmented_raw_syscalls.c: --- 8< --- struct syscall { bool enabled; }; bpf_map(syscalls, ARRAY, int, struct syscall, 512); --- 8< --- All we need is to get all that already available info, piggyback on the 'bpf_map' define in tools/perf/include/bpf/bpf.h, that is included by 'perf trace' BPF programs and do that without requiring changes to the BPF programs already defining maps using 'bpf_map()'. So this is what we have before this patch: 1) With this in ~/.perfconfig to dump .c events as .o, aka save a copy so that we can use the .o later as a pre-compiled BPF bytecode: # grep '\[llvm\]' -A2 ~/.perfconfig [llvm] dump-obj = true clang-opt = -g # # clang --version clang version 9.0.0 (https://git.llvm.org/git/clang.git/ 7906282d3afec5dfdc2b27943fd6c0309086c507) (https://git.llvm.org/git/llvm.git/ a1b5de1ff8ae8bc79dc8e86e1f82565229bd0500) Target: x86_64-unknown-linux-gnu Thread model: posix InstalledDir: /opt/llvm/bin 2) Note the -g there so that we get clang to generate debuginfo, and since the target is 'bpf' it will generate the BTF info in this clang version (9.0). 3) Run a simple 'perf record' specifiying as an event the augmented_raw_syscalls.c source code: # perf record -e /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.c sleep 1 LLVM: dumping /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.o [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.025 MB perf.data ] # file /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.o /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.o: ELF 64-bit LSB relocatable, eBPF, version 1 (SYSV), with debug_info, not stripped 4) Look at the BTF structs encoded in it: # pahole -F btf --sizes /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.o syscall_enter_args 64 0 augmented_filename 264 0 syscall 1 0 syscall_exit_args 24 0 bpf_map 28 0 # # pahole -F btf -C syscalls /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.o # pahole -F btf -C syscall /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.o struct syscall { bool enabled; /* 0 1 */ /* size: 1, cachelines: 1, members: 1 */ /* last cacheline: 1 bytes */ }; # 5) Ok, with just this we don't have the markers expected by the libbpf loader and when we run with this BPF bytecode, because we have: # grep '\[trace\]' -A1 ~/.perfconfig [trace] add_events = /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.o # 6) Lets do a 'perf trace' system wide session using this BPF program: # perf trace -e *mmsg,open* Cache2 I/O/6885 openat(AT_FDCWD, "/home/acme/.cache/mozilla/firefox/ina67tev.default/cache2/entries/BA220AB2914006A7AE96D27BE6EA13DD77519FCA", O_RDWR|O_CREAT|O_TRUNC, S_IRUSR|S_IWUSR) = 106 Cache2 I/O/6885 openat(AT_FDCWD, "/proc/self/mountinfo", O_RDONLY) = 121 Cache2 I/O/6885 openat(AT_FDCWD, "/proc/self/mountinfo", O_RDONLY) = 121 Cache2 I/O/6885 openat(AT_FDCWD, "/proc/self/mountinfo", O_RDONLY) = 121 Cache2 I/O/6885 
openat(AT_FDCWD, "/proc/self/mountinfo", O_RDONLY) = 121 DNS Res~ver #3/23340 openat(AT_FDCWD, "/etc/hosts", O_RDONLY|O_CLOEXEC) = 106 DNS Res~ver #3/23340 sendmmsg(106<socket:[3482690]>, 0x7f252f1fcaf0, 2, MSG_NOSIGNAL) = 2 Cache2 I/O/6885 openat(AT_FDCWD, "/home/acme/.cache/mozilla/firefox/ina67tev.default/cache2/entries/BA220AB2914006A7AE96D27BE6EA13DD77519FCA", O_RDWR) = 106 lighttpd/18915 openat(AT_FDCWD, "/proc/loadavg", O_RDONLY) = 12 7) While it runs lets see the maps that 'perf trace' + libbpf's BPF loader loaded into the kernel via sys_bpf(fd, BPF_BTF_LOAD, ...): # bpftool map list | tail -6 149: perf_event_array name __augmented_sys flags 0x0 key 4B value 4B max_entries 8 memlock 4096B 150: array name syscalls flags 0x0 key 4B value 1B max_entries 512 memlock 8192B 151: hash name pids_filtered flags 0x0 key 4B value 1B max_entries 64 memlock 8192B # 8) Dump the "pids_filtered", map, that will have one entry per PID that 'perf trace' wants filtered, which includes its own, to avoid a tracing feedback loop (perf trace shows the syscalls it does which generates more syscalls that it has to show that...), it also auto-filters the 'gnome-terminal' and 'sshd' parent PIDs, for the same reason: # bpftool map dump id 151 key: a5 0c 00 00 value: 01 key: 14 63 00 00 value: 01 Found 2 elements # 9) Since there is no BTF info available, it does a generic hex dump :-\ 10) Now, with this patch applied, we'll do steps 3 to 6 again and look with pahole if there are extra structs encoded in BTF: # pahole -F btf --sizes /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.o syscall_enter_args 64 0 augmented_filename 264 0 syscall 1 0 syscall_exit_args 24 0 bpf_map 28 0 ____btf_map___augmented_syscalls__ 8 0 ____btf_map_syscalls 8 0 ____btf_map_pids_filtered 8 0 # 11) Yes, those __btf_map_ + the map names, lets see how they look like: # pahole -F btf -C ____btf_map_syscalls /home/acme/git/perf/tools/perf/examples/bpf/augmented_raw_syscalls.o struct ____btf_map_syscalls { int key; /* 0 4 */ struct syscall value; /* 4 1 */ /* size: 8, cachelines: 1, members: 2 */ /* padding: 3 */ /* last cacheline: 8 bytes */ }; # 12) Lets repeat step 7 to get the new map ids: # bpftool map list | tail -6 155: perf_event_array name __augmented_sys flags 0x0 key 4B value 4B max_entries 8 memlock 4096B 156: array name syscalls flags 0x0 key 4B value 1B max_entries 512 memlock 8192B 157: hash name pids_filtered flags 0x0 key 4B value 1B max_entries 64 memlock 8192B # 13) And finally lets dump the 'pids_filtered': # bpftool map dump id 157 [{ "key": 3237, "value": true },{ "key": 26435, "value": true } ] # Looks much better! BTF info was used to interpret the key as an integer and the value as a struct with just one boolean member, so to make it more compact, show just the 'true' value where we saw '01'. Now to make 'perf trace --dump-map' to use BTF! Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexei Starovoitov <ast@fb.com> Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Luis Cláudio Gonçalves <lclaudio@redhat.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Song Liu <songliubraving@fb.com> Cc: Wang Nan <wangnan0@huawei.com> Cc: Yonghong Song <yhs@fb.com> Link: https://lkml.kernel.org/n/tip-ybuf9wpkm30xk28iq7jbwb40@git.kernel.orgSigned-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 05 Mar, 2019 2 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Linus Torvalds authored
Pull RCU updates from Ingo Molnar: "The main RCU related changes in this cycle were: - Additional cleanups after RCU flavor consolidation - Grace-period forward-progress cleanups and improvements - Documentation updates - Miscellaneous fixes - spin_is_locked() conversions to lockdep - SPDX changes to RCU source and header files - SRCU updates - Torture-test updates, including nolibc updates and moving nolibc to tools/include" * 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (71 commits) locking/locktorture: Convert to SPDX license identifier linux/torture: Convert to SPDX license identifier torture: Convert to SPDX license identifier linux/srcu: Convert to SPDX license identifier linux/rcutree: Convert to SPDX license identifier linux/rcutiny: Convert to SPDX license identifier linux/rcu_sync: Convert to SPDX license identifier linux/rcu_segcblist: Convert to SPDX license identifier linux/rcupdate: Convert to SPDX license identifier linux/rcu_node_tree: Convert to SPDX license identifier rcu/update: Convert to SPDX license identifier rcu/tree: Convert to SPDX license identifier rcu/tiny: Convert to SPDX license identifier rcu/sync: Convert to SPDX license identifier rcu/srcu: Convert to SPDX license identifier rcu/rcutorture: Convert to SPDX license identifier rcu/rcu_segcblist: Convert to SPDX license identifier rcu/rcuperf: Convert to SPDX license identifier rcu/rcu.h: Convert to SPDX license identifier RCU/torture.txt: Remove section MODULE PARAMETERS ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Linus Torvalds authored
Pull year 2038 updates from Thomas Gleixner: "Another round of changes to make the kernel ready for 2038. After lots of preparatory work this is the first set of syscalls which are 2038 safe: 403 clock_gettime64 404 clock_settime64 405 clock_adjtime64 406 clock_getres_time64 407 clock_nanosleep_time64 408 timer_gettime64 409 timer_settime64 410 timerfd_gettime64 411 timerfd_settime64 412 utimensat_time64 413 pselect6_time64 414 ppoll_time64 416 io_pgetevents_time64 417 recvmmsg_time64 418 mq_timedsend_time64 419 mq_timedreceiv_time64 420 semtimedop_time64 421 rt_sigtimedwait_time64 422 futex_time64 423 sched_rr_get_interval_time64 The syscall numbers are identical all over the architectures" * 'timers-2038-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (36 commits) riscv: Use latest system call ABI checksyscalls: fix up mq_timedreceive and stat exceptions unicore32: Fix __ARCH_WANT_STAT64 definition asm-generic: Make time32 syscall numbers optional asm-generic: Drop getrlimit and setrlimit syscalls from default list 32-bit userspace ABI: introduce ARCH_32BIT_OFF_T config option compat ABI: use non-compat openat and open_by_handle_at variants y2038: add 64-bit time_t syscalls to all 32-bit architectures y2038: rename old time and utime syscalls y2038: remove struct definition redirects y2038: use time32 syscall names on 32-bit syscalls: remove obsolete __IGNORE_ macros y2038: syscalls: rename y2038 compat syscalls x86/x32: use time64 versions of sigtimedwait and recvmmsg timex: change syscalls to use struct __kernel_timex timex: use __kernel_timex internally sparc64: add custom adjtimex/clock_adjtime functions time: fix sys_timer_settime prototype time: Add struct __kernel_timex time: make adjtime compat handling available for 32 bit ...
-