- 09 May, 2012 14 commits
-
-
Peter Zijlstra authored
We can easily use a single callback for both sched-in and sched-out. This reduces the code footprint in the scheduler path as well as removes the PMU black spot otherwise present between the out and in callback. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/n/tip-o56ajxp1edwqg6x9d31wb805@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
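A minimal sketch of the idea, with illustrative names rather than the kernel's actual interface: the scheduler invokes one hook per context switch, and the sched-out/sched-in handling happens back to back inside it, so no window is left between two separate callbacks.

#include <stdio.h>

struct task { const char *name; };

static void pmu_sched_out(struct task *prev) { printf("stop PMU state for %s\n", prev->name); }
static void pmu_sched_in(struct task *next)  { printf("start PMU state for %s\n", next->name); }

/* hypothetical single callback invoked once from the context-switch path */
static void perf_task_sched(struct task *prev, struct task *next)
{
        pmu_sched_out(prev);   /* out and in are now adjacent: */
        pmu_sched_in(next);    /* no PMU black spot in between */
}

int main(void)
{
        struct task a = { "prev" }, b = { "next" };
        perf_task_sched(&a, &b);
        return 0;
}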
-
Robert Richter authored
The value of IbsOpCurCnt rolls over when it reaches IbsOpMaxCnt. Thus, it is reset to zero by hardware. To get the correct count we need to add the max count to it in case we received an ibs sample (valid bit set). Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333390758-10893-13-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
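A rough user-space illustration of that arithmetic (function and parameter names are assumptions, not the driver's):

#include <stdint.h>
#include <stdio.h>

/* IbsOpCurCnt wraps to zero at IbsOpMaxCnt, so a valid sample means the
 * hardware has already rolled over and the max count must be added back. */
static uint64_t ibs_event_count(uint64_t cur_cnt, uint64_t max_cnt, int sample_valid)
{
        return sample_valid ? max_cnt + cur_cnt : cur_cnt;
}

int main(void)
{
        /* e.g. max period 0x10000, counter already wrapped around to 37 */
        printf("%llu\n", (unsigned long long)ibs_event_count(37, 0x10000, 1));
        return 0;
}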
-
Robert Richter authored
After disabling IBS there could still be incoming NMIs with samples that even have the valid bit cleared. Mark all these NMIs as handled to avoid spurious interrupt messages. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333390758-10893-12-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Robert Richter authored
When disabling ibs there can be cases where the hardware continuously generates interrupts. This is described in erratum #420 (Instruction-Based Sampling Engine May Generate Interrupt that Cannot Be Cleared). To avoid this we must clear the counter mask first and then clear the enable bit. This patch implements this. See the Revision Guide for AMD Family 10h Processors, Publication #41322. Note: We now keep track of the last read ibs config value, which is then used to disable ibs. To update the config value we now pass a pointer to the functions reading it. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333390758-10893-11-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
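A minimal sketch of the two-step disable sequence, with a stub standing in for the MSR write; the mask and bit positions here are assumptions for illustration, not the actual register layout:

#include <stdint.h>
#include <stdio.h>

#define IBS_MAX_CNT_MASK  0x0000ffffULL   /* assumed counter-mask bits */
#define IBS_ENABLE_BIT    (1ULL << 17)    /* assumed enable bit */

static void wrmsrl_stub(uint32_t msr, uint64_t val)
{
        printf("wrmsr %#x <- %#llx\n", msr, (unsigned long long)val);
}

/* 'config' is the last value read from the IBS control register */
static void ibs_disable(uint32_t msr, uint64_t config)
{
        /* erratum #420: clear the counter mask first ... */
        config &= ~IBS_MAX_CNT_MASK;
        wrmsrl_stub(msr, config);
        /* ... and only then clear the enable bit */
        config &= ~IBS_ENABLE_BIT;
        wrmsrl_stub(msr, config);
}

int main(void)
{
        ibs_disable(0x1234, 0x2ffff);   /* example values for illustration */
        return 0;
}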
-
Robert Richter authored
If the last hw period is too short we might hit the irq handler, which biases the results. Thus try to have a max last period that triggers the sw overflow. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333390758-10893-10-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Robert Richter authored
There are cases where the remaining period is smaller than the minimal possible value. In this case the counter is restarted with the minimal period. This is of no use as the interrupt handler will trigger again immediately and most likely hit itself. This biases the results. So, if the remaining period is within the min range, it is better not to restart the counter and instead trigger the overflow. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333390758-10893-9-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
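The restart decision could be sketched like this (illustrative names and threshold, not the driver's code):

#include <stdint.h>

#define MIN_IBS_PERIOD 0x10   /* assumed smallest usable hardware period */

struct hw_counter { int64_t period_left; };

/* Returns 1 when the caller should report the overflow right away
 * instead of reprogramming the counter with a too-short period. */
static int counter_restart(struct hw_counter *hwc)
{
        if (hwc->period_left < MIN_IBS_PERIOD)
                return 1;   /* would fire again immediately: treat as overflow now */

        /* ... otherwise program the hardware with hwc->period_left ... */
        return 0;
}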
-
Robert Richter authored
Simple patch that just renames some variables for better understanding. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333390758-10893-8-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Robert Richter authored
This patch adds support for precise event sampling with IBS. There are two counting modes to count either cycles or micro-ops. If the corresponding performance counter events (hw events) are set up with the precise flag set, the request is redirected to the ibs pmu:

perf record -a -e cpu-cycles:p ...    # use ibs op counting cycle count
perf record -a -e r076:p ...          # same as -e cpu-cycles:p
perf record -a -e r0C1:p ...          # use ibs op counting micro-ops

Each ibs sample contains a linear address that points to the instruction that caused the sample to trigger. With ibs we have skid 0. Thus, ibs supports precise levels 1 and 2. Samples are marked with the PERF_EFLAGS_EXACT flag set. In rare cases the rip is invalid when IBS was not able to record the rip correctly. Then the PERF_EFLAGS_EXACT flag is cleared and the rip is taken from pt_regs.

V2:
* don't drop samples in precise level 2 if rip is invalid, instead support the PERF_EFLAGS_EXACT flag

Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20120502103309.GP18810@erda.amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Robert Richter authored
Each IBS sample contains a linear address of the instruction that caused the sample to trigger. This address is more precise than the rip that was taken from the interrupt handler's stack. Update the rip with that address. We use this in the next patch to implement precise-event sampling on AMD systems using IBS. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333390758-10893-6-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
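Roughly, the fixup amounts to the following sketch; the structure and parameter names are placeholders, not the kernel's:

#include <stdint.h>

struct regs_sketch { uint64_t ip; };

/* Prefer the linear address captured by IBS over the rip saved on the
 * interrupt handler's stack, but only when the hardware marked it valid. */
static void ibs_set_precise_ip(struct regs_sketch *regs, uint64_t ibs_rip, int rip_valid)
{
        if (rip_valid)
                regs->ip = ibs_rip;
}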
-
Robert Richter authored
This fixes profiling at a fixed frequency; previously, the freq value and the sample period were set up incorrectly. Since sampling periods are adjusted we also allow periods that have the lower 4 bits set. Another fix is the setup of the hw counter: if we modify hwc->sample_period, we also need to update hwc->last_period and hwc->period_left. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333390758-10893-5-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
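The second fix boils down to keeping the three fields in sync whenever the period changes; a sketch with assumed field names:

#include <stdint.h>

struct hw_sketch {
        uint64_t sample_period;
        uint64_t last_period;
        int64_t  period_left;
};

/* Adjusting sample_period alone is not enough: last_period feeds the
 * event distribution and period_left drives the next hardware restart. */
static void adjust_sample_period(struct hw_sketch *hwc, uint64_t new_period)
{
        hwc->sample_period = new_period;
        hwc->last_period   = new_period;
        hwc->period_left   = (int64_t)new_period;
}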
-
Robert Richter authored
Allow enabling ibs op micro-ops counting mode. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333390758-10893-4-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Robert Richter authored
We always need to pass the last sample period to perf_sample_data_init(), otherwise the event distribution will be wrong. Thus, modify the function interface to take the required period as an argument. So basically a pattern like this:

  perf_sample_data_init(&data, ~0ULL);
  data.period = event->hw.last_period;

will now look like this:

  perf_sample_data_init(&data, ~0ULL, event->hw.last_period);

This avoids an uninitialized data.period and simplifies the code. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333390758-10893-3-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
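A simplified sketch of what such an initializer looks like with the period as a parameter (the real perf_sample_data has many more fields; this only illustrates the interface change):

#include <stdint.h>

struct sample_data_sketch {
        uint64_t addr;
        uint64_t period;
};

/* With the period as a parameter, no caller can forget to set it. */
static inline void sample_data_init(struct sample_data_sketch *data,
                                    uint64_t addr, uint64_t period)
{
        data->addr   = addr;
        data->period = period;
}

Callers then pass the last period directly at initialization time, as in the pattern shown above.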
-
Robert Richter authored
The last sw period was not correctly updated on overflow and thus led to wrong distribution of events. We always need to properly initialize data.period in struct perf_sample_data. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333390758-10893-2-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Ingo Molnar authored
-
- 08 May, 2012 1 commit
-
-
Ingo Molnar authored
Merge branch 'perf/annotate' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Perf annotate browser improvements:

- Get back the line separating the overheads from the disassembly, requested by Peter Zijlstra; Linus agreed now that it is a solid line and more column real estate was harvested. Also it has the jump->arrow lines separated from it by the address/jump target column.

- Don't change asm line color when toggling source code view. Requested by Peter Zijlstra.

Current snapshot:

avtab_search_node
       │      push   %rbp
       │      mov    %rsp,%rbp
       │    → callq  mcount
       │      movzwl 0x6(%rsi),%edx
       │      and    $0x7fff,%dx
       │      test   %rdi,%rdi
       │    ↓ jne    20
  0.42 │17:┌─→xor    %eax,%eax
       │19:│  leaveq
  0.42 │   │← retq
       │   │  nopl   0x0(%rax,%rax,1)
       │20:│  mov    (%rdi),%rax
  0.08 │   │  test   %rax,%rax
       │   └──je     17
       │      movzwl (%rsi),%ecx
       │      movzwl 0x2(%rsi),%r9d
       │      movzwl 0x4(%rsi),%r8d
       │      movzwl %cx,%esi
       │      movzwl %r9w,%r10d
       │      shl    $0x9,%esi
       │      lea    (%rsi,%r10,4),%esi
       │      lea    (%r8,%rsi,1),%esi
       │      and    0x10(%rdi),%si
       │      movzwl %si,%esi
       │      mov    (%rax,%rsi,8),%rax
  1.01 │      test   %rax,%rax
       │    ↑ je     19
       │      nopw   0x0(%rax,%rax,1)
  3.19 │60:   cmp    %cx,(%rax)
       │    ↓ jne    7e
  0.08 │      cmp    %r9w,0x2(%rax)
       │    ↓ jne    7e
       │      cmp    %r8w,0x4(%rax)
       │    ↓ jne    79
       │      test   %dx,0x6(%rax)
       │    ↑ jne    19
       │79:   cmp    %r8w,0x4(%rax)
 83.45 │7e: ↑ ja     17
  3.36 │      mov    0x10(%rax),%rax
  7.98 │      test   %rax,%rax
       │    ↑ jne    60
       │      leaveq
       │    ← retq

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 07 May, 2012 3 commits
-
-
Arnaldo Carvalho de Melo authored
Just suppress the nop operands; future infrastructure that will record the instruction length (and its contents) in struct ins will allow rendering them as nopN, i.e. nop5 for a 5-byte nop. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-qddbeglfzqdlal8vj2yaj67y@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
Instead of doing the same in all ins scnprintf methods. Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-8mfairi2n1nentoa852alazv@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Ingo Molnar authored
Merge branch 'tip/perf/core-4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace into perf/core
-
- 04 May, 2012 1 commit
-
-
Steven Rostedt authored
If CONFIG_KPROBES is not set, then linux/kprobes.h will not include asm/kprobes.h needed by x86/ftrace.c for the BREAKPOINT macro. The x86/ftrace.c file should just include asm/kprobes.h as it does not need the rest of kprobes. Reported-by: Ingo Molnar <mingo@elte.hu> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 03 May, 2012 3 commits
-
-
Arnaldo Carvalho de Melo authored
It gets confusing. An appropriate different color for source code remains to be chosen. This effectively reverts 58e817d9 ("perf annotate: Print asm code as blue when source code is displayed"). Requested-by: Peter Zijlstra <peterz@infradead.org> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-qy9iq32nj3uqe5dbiuq9e3j9@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
The first column (more columns in the near future) is for the per-line event overhead(s), which only appear when they are not zero. To clearly separate it, add back a solid vertical line, with just one colour, not influenced by the per-line overheads. Then have the addr/offset column, then optionally the dynamic (static in the future) jump->target arrows, if 'j' enables it. Then the instructions. Requested-by: Peter Zijlstra <peterz@infradead.org> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-r415t4sps0oyr9y8kd9j7clz@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-umb4jlu0ee8r2rc3x4jkahgk@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 28 Apr, 2012 2 commits
-
-
Steven Rostedt authored
As ftrace function tracing would require modifying code that could be executed in NMI context, which is not stopped with stop_machine(), ftrace had to use a complex algorithm with various stages of setup and memory barriers to make it work. With the new breakpoint method, this is no longer required. The changes to the code can be done without any problem in NMI context, as well as without stop_machine() altogether. Remove the complex code as it is no longer needed. Also, a lot of the notrace annotations could be removed from the NMI code as it is now safe to trace them, with the exception of do_nmi itself, which does some special work to handle running in the debug stack. The breakpoint method can cause NMIs to double-nest the debug stack if it is not set up properly, and that is done in do_nmi(), thus that function must not be traced. (Note: the sh architecture may want to do the same.) Cc: Paul Mundt <lethal@linux-sh.org> Cc: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Steven Rostedt authored
This method changes x86 to add a breakpoint to the mcount locations instead of calling stop_machine(). Now that iret can be handled by NMIs, we perform the following to update code:

1) Add a breakpoint to all locations that will be modified
2) Sync all cores
3) Update all locations to be either a nop or call (except the breakpoint op)
4) Sync all cores
5) Remove the breakpoint with the new code
6) Sync all cores

[ Added updates that Masami suggested: Use unlikely(modifying_ftrace_code) in the int3 trap to keep kprobes efficient. Don't use NOTIFY_* in the ftrace handler in int3 as it is not a notifier. ]

Cc: H. Peter Anvin <hpa@zytor.com> Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
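The sequence can be sketched as follows; the helpers are stand-ins for the real ftrace/text-poke machinery and only serve to make the six steps concrete.

#include <stdio.h>

static void add_breakpoints(void)     { puts("int3 written at every mcount site"); }
static void sync_cores(void)          { puts("all cores serialized"); }
static void write_new_code_tail(void) { puts("all bytes after the int3 updated"); }
static void remove_breakpoints(void)  { puts("int3 replaced by the new first byte"); }

static void update_mcount_sites(void)
{
        add_breakpoints();      /* 1) breakpoint every location to be modified */
        sync_cores();           /* 2) */
        write_new_code_tail();  /* 3) nop or call, except the breakpoint byte  */
        sync_cores();           /* 4) */
        remove_breakpoints();   /* 5) complete the new instruction             */
        sync_cores();           /* 6) */
}

int main(void) { update_mcount_sites(); return 0; }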
-
- 27 Apr, 2012 6 commits
-
-
Arnaldo Carvalho de Melo authored
Cleaning up the output some more. Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-81pimnsnaa9y2j0a9plstu1c@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
It is confusing when used with jump -> target lines. Requested-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-xeiyfsxptwtmlvowledg6wpy@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
Instead of trying to show the current loop by naively looking for the next backward jump, just use 'j' to toggle showing arrows connecting a jump with its target. And do it for forward jumps as well. Loop detection requires more code to follow the control flow, etc. Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-soahcn1lz2u4wxj31ch0594j@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
It figures out the direction and draws downwards arrows too if that is the case. Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-tg329nr7q4dg9d0tl3o0wywg@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
The counterpart of 'ret' instructions. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-jlz2ldaquaow0rqi2vr4b91l@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Ingo Molnar authored
Merge tag 'perf-annotate-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Annotation improvements:

Now the default annotate browser uses a much more compact format, implementing suggestions made by several people, notably Linus.

Here is part of the new __list_del_entry() annotation:

__list_del_entry
  8.47 │      push   %rbp
  8.47 │      mov    (%rdi),%rdx
 20.34 │      mov    $0xdead000000100100,%rcx
  3.39 │      mov    0x8(%rdi),%rax
  0.00 │      mov    %rsp,%rbp
  1.69 │      cmp    %rcx,%rdx
  0.00 │      je     43
  1.69 │      mov    $0xdead000000200200,%rcx
  3.39 │      cmp    %rcx,%rax
  0.00 │      je     a3
  5.08 │      mov    (%rax),%r8
 18.64 │      cmp    %r8,%rdi
  0.00 │      jne    84
  1.69 │      mov    0x8(%rdx),%r8
 25.42 │      cmp    %r8,%rdi
  0.00 │      jne    65
  1.69 │      mov    %rax,0x8(%rdx)
  0.00 │      mov    %rdx,(%rax)
  0.00 │      leaveq
  0.00 │      retq
  0.00 │ 43:  mov    %rdx,%r8
  0.00 │      mov    %rdi,%rcx
  0.00 │      mov    $0xffffffff817cd6a8,%rdx
  0.00 │      mov    $0x31,%esi
  0.00 │      mov    $0xffffffff817cd6e0,%rdi
  0.00 │      xor    %eax,%eax
  0.00 │      callq  ffffffff8104eab0 <warn_slowpath_fmt>
  0.00 │      leaveq
  0.00 │      retq
  0.00 │ 65:  mov    %rdi,%rcx
  0.00 │      mov    $0xffffffff817cd780,%rdx
  0.00 │      mov    $0x3a,%esi
  0.00 │      mov    $0xffffffff817cd6e0,%rdi
  0.00 │      xor    %eax,%eax
  0.00 │      callq  ffffffff8104eab0 <warn_slowpath_fmt>
  0.00 │      leaveq
  0.00 │      retq

The infrastructure is there to provide formatters for any instruction, like the one I'll do for call functions to elide the address.

Further fixes on top of the first iteration:

- Sometimes a jump points to an offset with no instructions; make the mark jump targets function handle that, for now just ignoring such jump targets. More investigation is needed to figure out how to cope with that.

- Handle jump targets that are outside the function; for now just don't try to draw the connector arrow. The right thing seems to be to mark this jump with a -> (right arrow) and handle it like a callq.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 26 Apr, 2012 4 commits
-
-
Robert Richter authored
Rename the remaining PERF_COUNTERS options to PERF_EVENTS. I think we can get rid of PERF_COUNTERS now. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333643084-26776-5-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Robert Richter authored
No need to have an additional function layer. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333643084-26776-4-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Robert Richter authored
Now the return value of cmpxchg() is used to match an event. The change removes the duplicate event comparison and traverses the list until an event has been removed. This also fixes the following warning:

arch/x86/kernel/cpu/perf_event_amd.c:170: warning: value computed is not used

Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333643084-26776-3-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
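The idiom can be illustrated in user space with C11 atomics; the kernel uses cmpxchg(), whose return value is the old pointer, but the effect is the same: the slot is matched and cleared in a single step.

#include <stdatomic.h>
#include <stddef.h>

struct event;

/* Scan the owners array; the compare-and-swap both checks that a slot
 * holds 'event' and, when it does, clears it, so no separate comparison
 * is needed and the loop can stop as soon as one slot has been released. */
static int release_owner_slot(_Atomic(struct event *) *owners, int nr,
                              struct event *event)
{
        for (int i = 0; i < nr; i++) {
                struct event *expected = event;

                if (atomic_compare_exchange_strong(&owners[i], &expected, NULL))
                        return i;
        }
        return -1;
}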
-
Robert Richter authored
Removing duplicate code. Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1333643084-26776-2-git-send-email-robert.richter@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 25 Apr, 2012 6 commits
-
-
Arnaldo Carvalho de Melo authored
As described in the previous patch. Next step is to properly label those jumps by using a -> arrow, i.e. not backwards/forwards, and allow the user to navigate to this other function when enter or -> is pressed. Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-ax2sss463eu88wgl9ee8a6b6@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
I.e. jumps that go to code outside the current function, which is denoted in objdump -dS as:

399f877a9f: jne 399f87bcf4 <_L_lock_5154>

I.e. without the + after the name of the current function, like in:

399f877aa5: jmp 399f877ab2 <_int_free+0x412>

The browser will use that info to avoid drawing connectors to the start of the function, since ops.target.addr was zero. Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-xrn35g2mlawz1ydo1p73w3q6@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Arnaldo Carvalho de Melo authored
We were using ins_ops->target for both callq addresses and jump offsets; disambiguate by having ins_ops->target.addr and ins_ops->target.offset. For jumps we'll need both to fix up lines that don't have an offset on the <> part. Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-3nlcmstua75u07ao7wja1rwx@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
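Schematically, the operand target would then carry both pieces of information; the field and helper names below are assumptions based on the description, not a copy of the tool's structures.

#include <stdint.h>

struct ins_target_sketch {
        uint64_t addr;      /* absolute address, e.g. a callq destination  */
        int64_t  offset;    /* offset within the function, for jump arrows */
};

/* A jump line such as "jne 400123 <foo+0x7e>" would fill in both: the
 * absolute address and, when the target is inside the function, the
 * offset used to draw the connector arrow. */
static void set_jump_target(struct ins_target_sketch *t, uint64_t addr,
                            uint64_t func_start, uint64_t func_size)
{
        t->addr   = addr;
        t->offset = (addr >= func_start && addr < func_start + func_size)
                    ? (int64_t)(addr - func_start) : -1;
}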
-
Ingo Molnar authored
A function name represents the pointer to it - no need to take the address of it. (Fixing this helps us introduce some macro magic around register_nmi_handler() in the future.) Cc: Robert Richter <robert.richter@amd.com> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Arnaldo Carvalho de Melo authored
In annotate_browser__mark_jump_targets:

702     dlt = browser->offsets[dl->ops.target];
703     bdlt = disasm_line__browser(dlt);
704     bdlt->jump_target = true;
705     }
706
707 }

(gdb) p size
$5 = 2415
(gdb) p offset
$6 = 140
(gdb) p dl->ops.target
$7 = 143
(gdb) p browser->offsets[143]
$8 = (struct disasm_line *) 0x0
(gdb) p dl->name
$9 = 0x2363bd0 "je"
(gdb)

Really strange: the code assumed that at the jump target we would have an assembly line, but only at the previous instruction offset do we have a 'lock':

(gdb) p browser->offsets[144]
$10 = (struct disasm_line *) 0x0
(gdb) p browser->offsets[142]
$11 = (struct disasm_line *) 0x27bd620
(gdb) p browser->offsets[142]->name
$12 = 0x237a8a0 "lock"
(gdb)

I'll study this more, but for now I'll just check if there is a disasm_line at dl->ops.target, i.e. a valid jump target.

Reported-by: Hagen Paul Pfeifer <hagen@jauu.net> Reported-by: Ingo Molnar <mingo@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-inzjrzyqhkzyv78met2vula6@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
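The defensive check amounts to something like the sketch below, following the names seen in the gdb session; it is illustrative, not the exact patch.

#include <stddef.h>
#include <stdint.h>

struct disasm_line_sketch { int jump_target; };

/* A jump may land on an offset that has no disassembly line of its own
 * (here it fell inside a 'lock'-prefixed instruction), so the lookup
 * result must be checked before it is dereferenced. */
static void mark_jump_target(struct disasm_line_sketch **offsets,
                             uint64_t target, uint64_t size)
{
        struct disasm_line_sketch *dlt;

        if (target >= size)
                return;
        dlt = offsets[target];
        if (dlt == NULL)
                return;
        dlt->jump_target = 1;
}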
-
Ingo Molnar authored
Merge v3.4-rc4 - we were on -rc2 before. Signed-off-by: Ingo Molnar <mingo@kernel.org>
-