Commit e0972916 authored by Linus Torvalds

Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf updates from Ingo Molnar:
 "Features:

   - Add "uretprobes" - an optimization to uprobes, like kretprobes are
     an optimization to kprobes.  "perf probe -x file sym%return" now
     works like kretprobes.  By Oleg Nesterov.

   - Introduce per core aggregation in 'perf stat', from Stephane
     Eranian.

   - Add memory profiling via PEBS, from Stephane Eranian.

   - Event group view for 'annotate' in --stdio, --tui and --gtk, from
     Namhyung Kim.

   - Add support for AMD NB and L2I "uncore" counters, by Jacob Shin.

   - Add Ivy Bridge-EP uncore support, by Zheng Yan.

   - IBM zEnterprise EC12 oprofile support patchlet from Robert Richter.

   - Add perf test entries for checking breakpoint overflow signal
     handler issues, from Jiri Olsa.

   - Add perf test entry for checking the number of EXIT events, from
     Namhyung Kim.

   - Add perf test entries for checking --cpu in record and stat, from
     Jiri Olsa.

   - Introduce perf stat --repeat forever, from Frederik Deweerdt.

   - Add --no-demangle to report/top, from Namhyung Kim.

   - PowerPC fixes plus a couple of cleanups/optimizations in uprobes
     and trace_uprobes, by Oleg Nesterov.

  Various fixes and refactorings:

   - Fix dependency of the python binding wrt libtraceevent, from
     Naohiro Aota.

   - Simplify some perf_evlist methods to allow 'stat' to share code
     with 'record' and 'trace', by Arnaldo Carvalho de Melo.

   - Remove dead code related to libtraceevent integration, from
     Namhyung Kim.

   - Revert "perf sched: Handle PERF_RECORD_EXIT events" to get 'perf
     sched lat' back working, by Arnaldo Carvalho de Melo

   - We don't use Newt anymore, just plain libslang, by Arnaldo Carvalho
     de Melo.

   - Kill a bunch of die() calls, from Namhyung Kim.

   - Fix build on non-glibc systems due to libio.h absence, from Cody P
     Schafer.

   - Remove some perf_session and tracing dead code, from David Ahern.

   - Honor parallel jobs, fix from Borislav Petkov.

   - Introduce tools/lib/lk library, initially just removing duplication
     among tools/perf and tools/vm, from Borislav Petkov.

  ... and many more that I did not list here; see the shortlog and git
  log for more details."

* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (136 commits)
  perf/x86/intel/P4: Robistify P4 PMU types
  perf/x86/amd: Fix AMD NB and L2I "uncore" support
  perf/x86/amd: Remove old-style NB counter support from perf_event_amd.c
  perf/x86: Check all MSRs before passing hw check
  perf/x86/amd: Add support for AMD NB and L2I "uncore" counters
  perf/x86/intel: Add Ivy Bridge-EP uncore support
  perf/x86/intel: Fix SNB-EP CBO and PCU uncore PMU filter management
  perf/x86: Avoid kfree() in CPU_{STARTING,DYING}
  uprobes/perf: Avoid perf_trace_buf_prepare/submit if ->perf_events is empty
  uprobes/tracing: Don't pass addr=ip to perf_trace_buf_submit()
  uprobes/tracing: Change create_trace_uprobe() to support uretprobes
  uprobes/tracing: Make seq_printf() code uretprobe-friendly
  uprobes/tracing: Make register_uprobe_event() paths uretprobe-friendly
  uprobes/tracing: Make uprobe_{trace,perf}_print() uretprobe-friendly
  uprobes/tracing: Introduce is_ret_probe() and uretprobe_dispatcher()
  uprobes/tracing: Introduce uprobe_{trace,perf}_print() helpers
  uprobes/tracing: Generalize struct uprobe_trace_entry_head
  uprobes/tracing: Kill the pointless local_save_flags/preempt_count calls
  uprobes/tracing: Kill the pointless seq_print_ip_sym() call
  uprobes/tracing: Kill the pointless task_pt_regs() calls
  ...
parents 1f889ec6 5ac2b5c2
Uprobe-tracer: Uprobe-based Event Tracing
=========================================

            Documentation written by Srikar Dronamraju

Overview
--------
Uprobe based trace events are similar to kprobe based trace events.

@@ -13,16 +15,18 @@ current_tracer. Instead of that, add probe points via
 /sys/kernel/debug/tracing/events/uprobes/<EVENT>/enabled.

 However unlike kprobe-event tracer, the uprobe event interface expects the
-user to calculate the offset of the probepoint in the object
+user to calculate the offset of the probepoint in the object.

 Synopsis of uprobe_tracer
 -------------------------
-  p[:[GRP/]EVENT] PATH:SYMBOL[+offs] [FETCHARGS]  : Set a probe
+  p[:[GRP/]EVENT] PATH:SYMBOL[+offs] [FETCHARGS]  : Set a uprobe
+  r[:[GRP/]EVENT] PATH:SYMBOL[+offs] [FETCHARGS]  : Set a return uprobe (uretprobe)
+  -:[GRP/]EVENT                                   : Clear uprobe or uretprobe event

- GRP           : Group name. If omitted, use "uprobes" for it.
- EVENT         : Event name. If omitted, the event name is generated
-                 based on SYMBOL+offs.
- PATH          : path to an executable or a library.
+ GRP           : Group name. If omitted, "uprobes" is the default value.
+ EVENT         : Event name. If omitted, the event name is generated based
+                 on SYMBOL+offs.
+ PATH          : Path to an executable or a library.
  SYMBOL[+offs] : Symbol+offset where the probe is inserted.
  FETCHARGS     : Arguments. Each probe can have up to 128 args.

@@ -30,27 +34,36 @@ Synopsis of uprobe_tracer
 Event Profiling
 ---------------
 You can check the total number of probe hits and probe miss-hits via
 /sys/kernel/debug/tracing/uprobe_profile.
 The first column is event name, the second is the number of probe hits,
 the third is the number of probe miss-hits.

 Usage examples
 --------------
-To add a probe as a new event, write a new definition to uprobe_events
-as below.
+ * Add a probe as a new uprobe event, write a new definition to uprobe_events
+   as below: (sets a uprobe at an offset of 0x4245c0 in the executable /bin/bash)

   echo 'p: /bin/bash:0x4245c0' > /sys/kernel/debug/tracing/uprobe_events

-This sets a uprobe at an offset of 0x4245c0 in the executable /bin/bash
+ * Add a probe as a new uretprobe event:
+
+  echo 'r: /bin/bash:0x4245c0' > /sys/kernel/debug/tracing/uprobe_events
+
+ * Unset registered event:
+
+  echo '-:bash_0x4245c0' >> /sys/kernel/debug/tracing/uprobe_events
+
+ * Print out the events that are registered:
+
+  cat /sys/kernel/debug/tracing/uprobe_events
+
+ * Clear all events:

   echo > /sys/kernel/debug/tracing/uprobe_events

-This clears all probe points.
-
-The following example shows how to dump the instruction pointer and %ax
-a register at the probed text address. Here we are trying to probe
-function zfree in /bin/zsh
+Following example shows how to dump the instruction pointer and %ax register
+at the probed text address. Probe zfree function in /bin/zsh:

   # cd /sys/kernel/debug/tracing/
   # cat /proc/`pgrep zsh`/maps | grep /bin/zsh | grep r-xp
@@ -58,22 +71,27 @@ function zfree in /bin/zsh
   # objdump -T /bin/zsh | grep -w zfree
   0000000000446420 g    DF .text  0000000000000012  Base        zfree

 0x46420 is the offset of zfree in object /bin/zsh that is loaded at
-0x00400000. Hence the command to probe would be :
+0x00400000. Hence the command to uprobe would be:
+
+  # echo 'p:zfree_entry /bin/zsh:0x46420 %ip %ax' > uprobe_events

-  # echo 'p /bin/zsh:0x46420 %ip %ax' > uprobe_events
+And the same for the uretprobe would be:
+
+  # echo 'r:zfree_exit /bin/zsh:0x46420 %ip %ax' >> uprobe_events

-Please note: User has to explicitly calculate the offset of the probepoint
+Please note: User has to explicitly calculate the offset of the probe-point
 in the object. We can see the events that are registered by looking at the
 uprobe_events file.

   # cat uprobe_events
-  p:uprobes/p_zsh_0x46420 /bin/zsh:0x00046420 arg1=%ip arg2=%ax
+  p:uprobes/zfree_entry /bin/zsh:0x00046420 arg1=%ip arg2=%ax
+  r:uprobes/zfree_exit  /bin/zsh:0x00046420 arg1=%ip arg2=%ax

-The format of events can be seen by viewing the file events/uprobes/p_zsh_0x46420/format
+Format of events can be seen by viewing the file events/uprobes/zfree_entry/format

-  # cat events/uprobes/p_zsh_0x46420/format
-  name: p_zsh_0x46420
+  # cat events/uprobes/zfree_entry/format
+  name: zfree_entry
   ID: 922
   format:
         field:unsigned short common_type;       offset:0;  size:2; signed:0;
@@ -94,6 +112,7 @@ events, you need to enable it by:
   # echo 1 > events/uprobes/enable

 Lets disable the event after sleeping for some time.
   # sleep 20
   # echo 0 > events/uprobes/enable
@@ -104,10 +123,11 @@ And you can see the traced information via /sys/kernel/debug/tracing/trace.
 #
 #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
 #              | |       |          |         |
-             zsh-24842 [006] 258544.995456: p_zsh_0x46420: (0x446420) arg1=446421 arg2=79
-             zsh-24842 [007] 258545.000270: p_zsh_0x46420: (0x446420) arg1=446421 arg2=79
-             zsh-24842 [002] 258545.043929: p_zsh_0x46420: (0x446420) arg1=446421 arg2=79
-             zsh-24842 [004] 258547.046129: p_zsh_0x46420: (0x446420) arg1=446421 arg2=79
+             zsh-24842 [006] 258544.995456: zfree_entry: (0x446420) arg1=446420 arg2=79
+             zsh-24842 [007] 258545.000270: zfree_exit:  (0x446540 <- 0x446420) arg1=446540 arg2=0
+             zsh-24842 [002] 258545.043929: zfree_entry: (0x446420) arg1=446420 arg2=79
+             zsh-24842 [004] 258547.046129: zfree_exit:  (0x446540 <- 0x446420) arg1=446540 arg2=0

-Each line shows us probes were triggered for a pid 24842 with ip being
-0x446421 and contents of ax register being 79.
+Output shows us uprobe was triggered for a pid 24842 with ip being 0x446420
+and contents of ax register being 79. And uretprobe was triggered with ip at
+0x446540 with counterpart function entry at 0x446420.
@@ -1332,11 +1332,11 @@ kernelversion:
 # Clear a bunch of variables before executing the submake
 tools/: FORCE
 	$(Q)mkdir -p $(objtree)/tools
-	$(Q)$(MAKE) LDFLAGS= MAKEFLAGS= O=$(objtree) subdir=tools -C $(src)/tools/
+	$(Q)$(MAKE) LDFLAGS= MAKEFLAGS="$(filter --j% -j,$(MAKEFLAGS))" O=$(objtree) subdir=tools -C $(src)/tools/

 tools/%: FORCE
 	$(Q)mkdir -p $(objtree)/tools
-	$(Q)$(MAKE) LDFLAGS= MAKEFLAGS= O=$(objtree) subdir=tools -C $(src)/tools/ $*
+	$(Q)$(MAKE) LDFLAGS= MAKEFLAGS="$(filter --j% -j,$(MAKEFLAGS))" O=$(objtree) subdir=tools -C $(src)/tools/ $*

 # Single targets
 # ---------------------------------------------------------------------------
......
@@ -51,4 +51,5 @@ extern int arch_uprobe_post_xol(struct arch_uprobe *aup, struct pt_regs *regs);
 extern bool arch_uprobe_xol_was_trapped(struct task_struct *tsk);
 extern int arch_uprobe_exception_notify(struct notifier_block *self, unsigned long val, void *data);
 extern void arch_uprobe_abort_xol(struct arch_uprobe *aup, struct pt_regs *regs);
+extern unsigned long arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr, struct pt_regs *regs);
 #endif	/* _ASM_UPROBES_H */
@@ -30,6 +30,16 @@
 #define UPROBE_TRAP_NR	UINT_MAX

+/**
+ * is_trap_insn - check if the instruction is a trap variant
+ * @insn: instruction to be checked.
+ * Returns true if @insn is a trap variant.
+ */
+bool is_trap_insn(uprobe_opcode_t *insn)
+{
+	return (is_trap(*insn));
+}
+
 /**
  * arch_uprobe_analyze_insn
  * @mm: the probed address space.
@@ -43,12 +53,6 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe,
 	if (addr & 0x03)
 		return -EINVAL;

-	/*
-	 * We currently don't support a uprobe on an already
-	 * existing breakpoint instruction underneath
-	 */
-	if (is_trap(auprobe->ainsn))
-		return -ENOTSUPP;
 	return 0;
 }
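The override above lets powerpc treat any trap variant as a breakpoint, which is why the old -ENOTSUPP check below it can go away. For orientation, the generic weak hook that this override replaces presumably just compares against the uprobe software breakpoint opcode; a minimal sketch of that assumed default (helper names as in kernel/events/uprobes.c, not shown in this diff):

    /* Assumed generic default: only the uprobe breakpoint itself counts. */
    bool __weak is_trap_insn(uprobe_opcode_t *insn)
    {
    	return is_swbp_insn(insn);
    }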
@@ -188,3 +192,16 @@ bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)

 	return false;
 }
+
+unsigned long
+arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr, struct pt_regs *regs)
+{
+	unsigned long orig_ret_vaddr;
+
+	orig_ret_vaddr = regs->link;
+
+	/* Replace the return addr with trampoline addr */
+	regs->link = trampoline_vaddr;
+
+	return orig_ret_vaddr;
+}
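On powerpc the return address lives in the link register, so the hijack is a register swap. On an architecture that pushes the return address onto the user stack, the same hook has to rewrite the word at the top of the stack instead; a hedged sketch of that shape (the pt_regs field names and error convention here are assumptions for illustration, not taken from this diff):

    unsigned long
    arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr,
    				  struct pt_regs *regs)
    {
    	unsigned long orig_ret_vaddr = 0;
    	const int rasize = sizeof(unsigned long);	/* return-address size */

    	/* Read the saved return address from the top of the user stack. */
    	if (copy_from_user(&orig_ret_vaddr, (void __user *)regs->sp, rasize))
    		return -1;

    	if (orig_ret_vaddr == trampoline_vaddr)
    		return orig_ret_vaddr;	/* already hijacked, nothing to do */

    	/* Overwrite it with the uretprobe trampoline address. */
    	if (copy_to_user((void __user *)regs->sp, &trampoline_vaddr, rasize))
    		return -1;

    	return orig_ret_vaddr;
    }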
@@ -440,6 +440,7 @@ static int oprofile_hwsampler_init(struct oprofile_operations *ops)
 		switch (id.machine) {
 		case 0x2097: case 0x2098: ops->cpu_type = "s390/z10";   break;
 		case 0x2817: case 0x2818: ops->cpu_type = "s390/z196";  break;
+		case 0x2827:              ops->cpu_type = "s390/zEC12"; break;
 		default: return -ENODEV;
 		}
 	}
......
@@ -168,6 +168,7 @@
 #define X86_FEATURE_TOPOEXT	(6*32+22) /* topology extensions CPUID leafs */
 #define X86_FEATURE_PERFCTR_CORE (6*32+23) /* core performance counter extensions */
 #define X86_FEATURE_PERFCTR_NB	(6*32+24) /* NB performance counter extensions */
+#define X86_FEATURE_PERFCTR_L2	(6*32+28) /* L2 performance counter extensions */

 /*
  * Auxiliary flags: Linux defined - For features scattered in various
@@ -311,6 +312,7 @@ extern const char * const x86_power_flags[32];
 #define cpu_has_pclmulqdq	boot_cpu_has(X86_FEATURE_PCLMULQDQ)
 #define cpu_has_perfctr_core	boot_cpu_has(X86_FEATURE_PERFCTR_CORE)
 #define cpu_has_perfctr_nb	boot_cpu_has(X86_FEATURE_PERFCTR_NB)
+#define cpu_has_perfctr_l2	boot_cpu_has(X86_FEATURE_PERFCTR_L2)
 #define cpu_has_cx8		boot_cpu_has(X86_FEATURE_CX8)
 #define cpu_has_cx16		boot_cpu_has(X86_FEATURE_CX16)
 #define cpu_has_eager_fpu	boot_cpu_has(X86_FEATURE_EAGER_FPU)
......
@@ -24,45 +24,45 @@
 #define ARCH_P4_CNTRVAL_MASK	((1ULL << ARCH_P4_CNTRVAL_BITS) - 1)
 #define ARCH_P4_UNFLAGGED_BIT	((1ULL) << (ARCH_P4_CNTRVAL_BITS - 1))

-#define P4_ESCR_EVENT_MASK	0x7e000000U
+#define P4_ESCR_EVENT_MASK	0x7e000000ULL
 #define P4_ESCR_EVENT_SHIFT	25
-#define P4_ESCR_EVENTMASK_MASK	0x01fffe00U
+#define P4_ESCR_EVENTMASK_MASK	0x01fffe00ULL
 #define P4_ESCR_EVENTMASK_SHIFT	9
-#define P4_ESCR_TAG_MASK	0x000001e0U
+#define P4_ESCR_TAG_MASK	0x000001e0ULL
 #define P4_ESCR_TAG_SHIFT	5
-#define P4_ESCR_TAG_ENABLE	0x00000010U
-#define P4_ESCR_T0_OS		0x00000008U
-#define P4_ESCR_T0_USR		0x00000004U
-#define P4_ESCR_T1_OS		0x00000002U
-#define P4_ESCR_T1_USR		0x00000001U
+#define P4_ESCR_TAG_ENABLE	0x00000010ULL
+#define P4_ESCR_T0_OS		0x00000008ULL
+#define P4_ESCR_T0_USR		0x00000004ULL
+#define P4_ESCR_T1_OS		0x00000002ULL
+#define P4_ESCR_T1_USR		0x00000001ULL

 #define P4_ESCR_EVENT(v)	((v) << P4_ESCR_EVENT_SHIFT)
 #define P4_ESCR_EMASK(v)	((v) << P4_ESCR_EVENTMASK_SHIFT)
 #define P4_ESCR_TAG(v)		((v) << P4_ESCR_TAG_SHIFT)

-#define P4_CCCR_OVF			0x80000000U
-#define P4_CCCR_CASCADE			0x40000000U
-#define P4_CCCR_OVF_PMI_T0		0x04000000U
-#define P4_CCCR_OVF_PMI_T1		0x08000000U
-#define P4_CCCR_FORCE_OVF		0x02000000U
-#define P4_CCCR_EDGE			0x01000000U
-#define P4_CCCR_THRESHOLD_MASK		0x00f00000U
+#define P4_CCCR_OVF			0x80000000ULL
+#define P4_CCCR_CASCADE			0x40000000ULL
+#define P4_CCCR_OVF_PMI_T0		0x04000000ULL
+#define P4_CCCR_OVF_PMI_T1		0x08000000ULL
+#define P4_CCCR_FORCE_OVF		0x02000000ULL
+#define P4_CCCR_EDGE			0x01000000ULL
+#define P4_CCCR_THRESHOLD_MASK		0x00f00000ULL
 #define P4_CCCR_THRESHOLD_SHIFT		20
-#define P4_CCCR_COMPLEMENT		0x00080000U
-#define P4_CCCR_COMPARE			0x00040000U
-#define P4_CCCR_ESCR_SELECT_MASK	0x0000e000U
+#define P4_CCCR_COMPLEMENT		0x00080000ULL
+#define P4_CCCR_COMPARE			0x00040000ULL
+#define P4_CCCR_ESCR_SELECT_MASK	0x0000e000ULL
 #define P4_CCCR_ESCR_SELECT_SHIFT	13
-#define P4_CCCR_ENABLE			0x00001000U
-#define P4_CCCR_THREAD_SINGLE		0x00010000U
-#define P4_CCCR_THREAD_BOTH		0x00020000U
-#define P4_CCCR_THREAD_ANY		0x00030000U
-#define P4_CCCR_RESERVED		0x00000fffU
+#define P4_CCCR_ENABLE			0x00001000ULL
+#define P4_CCCR_THREAD_SINGLE		0x00010000ULL
+#define P4_CCCR_THREAD_BOTH		0x00020000ULL
+#define P4_CCCR_THREAD_ANY		0x00030000ULL
+#define P4_CCCR_RESERVED		0x00000fffULL

 #define P4_CCCR_THRESHOLD(v)		((v) << P4_CCCR_THRESHOLD_SHIFT)
 #define P4_CCCR_ESEL(v)			((v) << P4_CCCR_ESCR_SELECT_SHIFT)

 #define P4_GEN_ESCR_EMASK(class, name, bit) \
-	class##__##name = ((1 << bit) << P4_ESCR_EVENTMASK_SHIFT)
+	class##__##name = ((1ULL << bit) << P4_ESCR_EVENTMASK_SHIFT)
 #define P4_ESCR_EMASK_BIT(class, name)	class##__##name

 /*
@@ -107,7 +107,7 @@
  * P4_PEBS_CONFIG_MASK and related bits on
  * modification.)
  */
-#define P4_CONFIG_ALIASABLE		(1 << 9)
+#define P4_CONFIG_ALIASABLE		(1ULL << 9)

 /*
  * The bits we allow to pass for RAW events
@@ -784,17 +784,17 @@ enum P4_ESCR_EMASKS {
  * Note we have UOP and PEBS bits reserved for now
  * just in case if we will need them once
  */
-#define P4_PEBS_CONFIG_ENABLE		(1 << 7)
-#define P4_PEBS_CONFIG_UOP_TAG		(1 << 8)
-#define P4_PEBS_CONFIG_METRIC_MASK	0x3f
-#define P4_PEBS_CONFIG_MASK		0xff
+#define P4_PEBS_CONFIG_ENABLE		(1ULL << 7)
+#define P4_PEBS_CONFIG_UOP_TAG		(1ULL << 8)
+#define P4_PEBS_CONFIG_METRIC_MASK	0x3FLL
+#define P4_PEBS_CONFIG_MASK		0xFFLL

 /*
  * mem: Only counters MSR_IQ_COUNTER4 (16) and
  * MSR_IQ_COUNTER5 (17) are allowed for PEBS sampling
  */
-#define P4_PEBS_ENABLE			0x02000000U
-#define P4_PEBS_ENABLE_UOP_TAG		0x01000000U
+#define P4_PEBS_ENABLE			0x02000000ULL
+#define P4_PEBS_ENABLE_UOP_TAG		0x01000000ULL

 #define p4_config_unpack_metric(v)	(((u64)(v)) & P4_PEBS_CONFIG_METRIC_MASK)
 #define p4_config_unpack_pebs(v)	(((u64)(v)) & P4_PEBS_CONFIG_MASK)
......
@@ -55,4 +55,5 @@ extern int arch_uprobe_post_xol(struct arch_uprobe *aup, struct pt_regs *regs);
 extern bool arch_uprobe_xol_was_trapped(struct task_struct *tsk);
 extern int arch_uprobe_exception_notify(struct notifier_block *self, unsigned long val, void *data);
 extern void arch_uprobe_abort_xol(struct arch_uprobe *aup, struct pt_regs *regs);
+extern unsigned long arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr, struct pt_regs *regs);
 #endif	/* _ASM_UPROBES_H */
@@ -72,6 +72,7 @@
 #define MSR_IA32_PEBS_ENABLE		0x000003f1
 #define MSR_IA32_DS_AREA		0x00000600
 #define MSR_IA32_PERF_CAPABILITIES	0x00000345
+#define MSR_PEBS_LD_LAT_THRESHOLD	0x000003f6

 #define MSR_MTRRfix64K_00000		0x00000250
 #define MSR_MTRRfix16K_80000		0x00000258
@@ -195,6 +196,10 @@
 #define MSR_AMD64_IBSBRTARGET		0xc001103b
 #define MSR_AMD64_IBS_REG_COUNT_MAX	8 /* includes MSR_AMD64_IBSBRTARGET */

+/* Fam 16h MSRs */
+#define MSR_F16H_L2I_PERF_CTL		0xc0010230
+#define MSR_F16H_L2I_PERF_CTR		0xc0010231
+
 /* Fam 15h MSRs */
 #define MSR_F15H_PERF_CTL		0xc0010200
 #define MSR_F15H_PERF_CTR		0xc0010201
......
@@ -31,7 +31,7 @@ obj-$(CONFIG_CPU_SUP_UMC_32)		+= umc.o
 obj-$(CONFIG_PERF_EVENTS)		+= perf_event.o

 ifdef CONFIG_PERF_EVENTS
-obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd.o
+obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd.o perf_event_amd_uncore.o
 obj-$(CONFIG_CPU_SUP_INTEL)		+= perf_event_p6.o perf_event_knc.o perf_event_p4.o
 obj-$(CONFIG_CPU_SUP_INTEL)		+= perf_event_intel_lbr.o perf_event_intel_ds.o perf_event_intel.o
 obj-$(CONFIG_CPU_SUP_INTEL)		+= perf_event_intel_uncore.o
......
@@ -180,8 +180,9 @@ static void release_pmc_hardware(void) {}

 static bool check_hw_exists(void)
 {
-	u64 val, val_new = ~0;
-	int i, reg, ret = 0;
+	u64 val, val_fail, val_new = ~0;
+	int i, reg, reg_fail, ret = 0;
+	int bios_fail = 0;

 	/*
 	 * Check to see if the BIOS enabled any of the counters, if so
@@ -192,8 +193,11 @@ static bool check_hw_exists(void)
 		ret = rdmsrl_safe(reg, &val);
 		if (ret)
 			goto msr_fail;
-		if (val & ARCH_PERFMON_EVENTSEL_ENABLE)
-			goto bios_fail;
+		if (val & ARCH_PERFMON_EVENTSEL_ENABLE) {
+			bios_fail = 1;
+			val_fail = val;
+			reg_fail = reg;
+		}
 	}

 	if (x86_pmu.num_counters_fixed) {
@@ -202,8 +206,11 @@ static bool check_hw_exists(void)
 		if (ret)
 			goto msr_fail;
 		for (i = 0; i < x86_pmu.num_counters_fixed; i++) {
-			if (val & (0x03 << i*4))
-				goto bios_fail;
+			if (val & (0x03 << i*4)) {
+				bios_fail = 1;
+				val_fail = val;
+				reg_fail = reg;
+			}
 		}
 	}

@@ -221,14 +228,13 @@ static bool check_hw_exists(void)
 	if (ret || val != val_new)
 		goto msr_fail;

-	return true;
-
-bios_fail:
 	/*
 	 * We still allow the PMU driver to operate:
 	 */
-	printk(KERN_CONT "Broken BIOS detected, complain to your hardware vendor.\n");
-	printk(KERN_ERR FW_BUG "the BIOS has corrupted hw-PMU resources (MSR %x is %Lx)\n", reg, val);
+	if (bios_fail) {
+		printk(KERN_CONT "Broken BIOS detected, complain to your hardware vendor.\n");
+		printk(KERN_ERR FW_BUG "the BIOS has corrupted hw-PMU resources (MSR %x is %Lx)\n", reg_fail, val_fail);
+	}

 	return true;
@@ -1316,9 +1322,16 @@ static struct attribute_group x86_pmu_format_group = {
  */
 static void __init filter_events(struct attribute **attrs)
 {
+	struct device_attribute *d;
+	struct perf_pmu_events_attr *pmu_attr;
 	int i, j;

 	for (i = 0; attrs[i]; i++) {
+		d = (struct device_attribute *)attrs[i];
+		pmu_attr = container_of(d, struct perf_pmu_events_attr, attr);
+		/* str trumps id */
+		if (pmu_attr->event_str)
+			continue;
 		if (x86_pmu.event_map(i))
 			continue;

@@ -1330,22 +1343,45 @@ static void __init filter_events(struct attribute **attrs)
 	}
 }

-static ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
+/* Merge two pointer arrays */
+static __init struct attribute **merge_attr(struct attribute **a, struct attribute **b)
+{
+	struct attribute **new;
+	int j, i;
+
+	for (j = 0; a[j]; j++)
+		;
+	for (i = 0; b[i]; i++)
+		j++;
+	j++;
+
+	new = kmalloc(sizeof(struct attribute *) * j, GFP_KERNEL);
+	if (!new)
+		return NULL;
+
+	j = 0;
+	for (i = 0; a[i]; i++)
+		new[j++] = a[i];
+	for (i = 0; b[i]; i++)
+		new[j++] = b[i];
+	new[j] = NULL;
+
+	return new;
+}
+
+ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
 			  char *page)
 {
 	struct perf_pmu_events_attr *pmu_attr = \
 		container_of(attr, struct perf_pmu_events_attr, attr);
-
 	u64 config = x86_pmu.event_map(pmu_attr->id);
-	return x86_pmu.events_sysfs_show(page, config);
-}

-#define EVENT_VAR(_id)  event_attr_##_id
-#define EVENT_PTR(_id) &event_attr_##_id.attr.attr
+	/* string trumps id */
+	if (pmu_attr->event_str)
+		return sprintf(page, "%s", pmu_attr->event_str);

-#define EVENT_ATTR(_name, _id)						\
-	PMU_EVENT_ATTR(_name, EVENT_VAR(_id), PERF_COUNT_HW_##_id,	\
-		       events_sysfs_show)
+	return x86_pmu.events_sysfs_show(page, config);
+}

 EVENT_ATTR(cpu-cycles,			CPU_CYCLES		);
 EVENT_ATTR(instructions,		INSTRUCTIONS		);
@@ -1459,16 +1495,27 @@ static int __init init_hw_perf_events(void)
 	unconstrained = (struct event_constraint)
 		__EVENT_CONSTRAINT(0, (1ULL << x86_pmu.num_counters) - 1,
-				   0, x86_pmu.num_counters, 0);
+				   0, x86_pmu.num_counters, 0, 0);

 	x86_pmu.attr_rdpmc = 1; /* enable userspace RDPMC usage by default */
 	x86_pmu_format_group.attrs = x86_pmu.format_attrs;

+	if (x86_pmu.event_attrs)
+		x86_pmu_events_group.attrs = x86_pmu.event_attrs;
+
 	if (!x86_pmu.events_sysfs_show)
 		x86_pmu_events_group.attrs = &empty_attrs;
 	else
 		filter_events(x86_pmu_events_group.attrs);

+	if (x86_pmu.cpu_events) {
+		struct attribute **tmp;
+
+		tmp = merge_attr(x86_pmu_events_group.attrs, x86_pmu.cpu_events);
+		if (!WARN_ON(!tmp))
+			x86_pmu_events_group.attrs = tmp;
+	}
+
 	pr_info("... version:                %d\n",     x86_pmu.version);
 	pr_info("... bit width:              %d\n",     x86_pmu.cntval_bits);
 	pr_info("... generic registers:      %d\n",     x86_pmu.num_counters);
......
@@ -46,6 +46,7 @@ enum extra_reg_type {
 	EXTRA_REG_RSP_0 = 0,	/* offcore_response_0 */
 	EXTRA_REG_RSP_1 = 1,	/* offcore_response_1 */
 	EXTRA_REG_LBR   = 2,	/* lbr_select */
+	EXTRA_REG_LDLAT = 3,	/* ld_lat_threshold */

 	EXTRA_REG_MAX		/* number of entries needed */
 };
@@ -59,7 +60,13 @@ struct event_constraint {
 	u64	cmask;
 	int	weight;
 	int	overlap;
+	int	flags;
 };
+/*
+ * struct event_constraint flags
+ */
+#define PERF_X86_EVENT_PEBS_LDLAT	0x1 /* ld+ldlat data address sampling */
+#define PERF_X86_EVENT_PEBS_ST		0x2 /* st data address sampling */

 struct amd_nb {
 	int nb_id;  /* NorthBridge id */
@@ -170,16 +177,17 @@ struct cpu_hw_events {
 	void				*kfree_on_online;
 };

-#define __EVENT_CONSTRAINT(c, n, m, w, o) {\
+#define __EVENT_CONSTRAINT(c, n, m, w, o, f) {\
 	{ .idxmsk64 = (n) },		\
 	.code = (c),			\
 	.cmask = (m),			\
 	.weight = (w),			\
 	.overlap = (o),			\
+	.flags = f,			\
 }

 #define EVENT_CONSTRAINT(c, n, m)	\
-	__EVENT_CONSTRAINT(c, n, m, HWEIGHT(n), 0)
+	__EVENT_CONSTRAINT(c, n, m, HWEIGHT(n), 0, 0)

 /*
  * The overlap flag marks event constraints with overlapping counter
@@ -203,7 +211,7 @@ struct cpu_hw_events {
  * and its counter masks must be kept at a minimum.
  */
 #define EVENT_CONSTRAINT_OVERLAP(c, n, m)	\
-	__EVENT_CONSTRAINT(c, n, m, HWEIGHT(n), 1)
+	__EVENT_CONSTRAINT(c, n, m, HWEIGHT(n), 1, 0)

 /*
  * Constraint on the Event code.
@@ -231,6 +239,14 @@ struct cpu_hw_events {
 #define INTEL_UEVENT_CONSTRAINT(c, n)	\
 	EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK)

+#define INTEL_PLD_CONSTRAINT(c, n)	\
+	__EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK, \
+			   HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_LDLAT)
+
+#define INTEL_PST_CONSTRAINT(c, n)	\
+	__EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK, \
+			   HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_ST)
+
 #define EVENT_CONSTRAINT_END		\
 	EVENT_CONSTRAINT(0, 0, 0)

@@ -260,12 +276,22 @@ struct extra_reg {
 	.msr = (ms),		\
 	.config_mask = (m),	\
 	.valid_mask = (vm),	\
-	.idx = EXTRA_REG_##i	\
+	.idx = EXTRA_REG_##i,	\
 	}

 #define INTEL_EVENT_EXTRA_REG(event, msr, vm, idx)	\
 	EVENT_EXTRA_REG(event, msr, ARCH_PERFMON_EVENTSEL_EVENT, vm, idx)

+#define INTEL_UEVENT_EXTRA_REG(event, msr, vm, idx) \
+	EVENT_EXTRA_REG(event, msr, ARCH_PERFMON_EVENTSEL_EVENT | \
+			ARCH_PERFMON_EVENTSEL_UMASK, vm, idx)
+
+#define INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(c) \
+	INTEL_UEVENT_EXTRA_REG(c, \
+			       MSR_PEBS_LD_LAT_THRESHOLD, \
+			       0xffff, \
+			       LDLAT)
+
 #define EVENT_EXTRA_END EVENT_EXTRA_REG(0, 0, 0, 0, RSP_0)

 union perf_capabilities {
@@ -355,8 +381,10 @@ struct x86_pmu {
 	 */
 	int		attr_rdpmc;
 	struct attribute **format_attrs;
+	struct attribute **event_attrs;

 	ssize_t		(*events_sysfs_show)(char *page, u64 config);
+	struct attribute **cpu_events;

 	/*
 	 * CPU Hotplug hooks
@@ -421,6 +449,23 @@ do {									\
 #define ERF_NO_HT_SHARING	1
 #define ERF_HAS_RSP_1		2

+#define EVENT_VAR(_id)  event_attr_##_id
+#define EVENT_PTR(_id) &event_attr_##_id.attr.attr
+
+#define EVENT_ATTR(_name, _id)						\
+static struct perf_pmu_events_attr EVENT_VAR(_id) = {			\
+	.attr		= __ATTR(_name, 0444, events_sysfs_show, NULL),	\
+	.id		= PERF_COUNT_HW_##_id,				\
+	.event_str	= NULL,						\
+};
+
+#define EVENT_ATTR_STR(_name, v, str)					\
+static struct perf_pmu_events_attr event_attr_##v = {			\
+	.attr		= __ATTR(_name, 0444, events_sysfs_show, NULL),	\
+	.id		= 0,						\
+	.event_str	= str,						\
+};
+
 extern struct x86_pmu x86_pmu __read_mostly;

 DECLARE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
@@ -628,6 +673,9 @@ int p6_pmu_init(void);

 int knc_pmu_init(void);

+ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
+			  char *page);
+
 #else /* CONFIG_CPU_SUP_INTEL */

 static inline void reserve_ds_buffers(void)
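To make the new EVENT_ATTR_STR macro concrete: the mem-loads alias defined later in this series, EVENT_ATTR_STR(mem-loads, mem_ld_nhm, "event=0x0b,umask=0x10,ldlat=3"), expands to roughly the following (illustrative expansion, not a line from the diff):

    static struct perf_pmu_events_attr event_attr_mem_ld_nhm = {
    	.attr		= __ATTR(mem-loads, 0444, events_sysfs_show, NULL),
    	.id		= 0,
    	.event_str	= "event=0x0b,umask=0x10,ldlat=3",
    };

Because .event_str is non-NULL, events_sysfs_show() prints the string and ignores .id; this is the "string trumps id" rule applied in perf_event.c above.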
......
@@ -132,14 +132,11 @@ static u64 amd_pmu_event_map(int hw_event)
 	return amd_perfmon_event_map[hw_event];
 }

-static struct event_constraint *amd_nb_event_constraint;
-
 /*
  * Previously calculated offsets
  */
 static unsigned int event_offsets[X86_PMC_IDX_MAX] __read_mostly;
 static unsigned int count_offsets[X86_PMC_IDX_MAX] __read_mostly;
-static unsigned int rdpmc_indexes[X86_PMC_IDX_MAX] __read_mostly;

 /*
  * Legacy CPUs:
@@ -147,14 +144,10 @@ static unsigned int rdpmc_indexes[X86_PMC_IDX_MAX] __read_mostly;
  *
  * CPUs with core performance counter extensions:
  *	6 counters starting at 0xc0010200 each offset by 2
- *
- * CPUs with north bridge performance counter extensions:
- *	4 additional counters starting at 0xc0010240 each offset by 2
- *	(indexed right above either one of the above core counters)
  */
 static inline int amd_pmu_addr_offset(int index, bool eventsel)
 {
-	int offset, first, base;
+	int offset;

 	if (!index)
 		return index;
@@ -167,23 +160,7 @@ static inline int amd_pmu_addr_offset(int index, bool eventsel)
 	if (offset)
 		return offset;

-	if (amd_nb_event_constraint &&
-	    test_bit(index, amd_nb_event_constraint->idxmsk)) {
-		/*
-		 * calculate the offset of NB counters with respect to
-		 * base eventsel or perfctr
-		 */
-		first = find_first_bit(amd_nb_event_constraint->idxmsk,
-				       X86_PMC_IDX_MAX);
-
-		if (eventsel)
-			base = MSR_F15H_NB_PERF_CTL - x86_pmu.eventsel;
-		else
-			base = MSR_F15H_NB_PERF_CTR - x86_pmu.perfctr;
-
-		offset = base + ((index - first) << 1);
-	} else if (!cpu_has_perfctr_core)
+	if (!cpu_has_perfctr_core)
 		offset = index;
 	else
 		offset = index << 1;
@@ -196,36 +173,6 @@ static inline int amd_pmu_addr_offset(int index, bool eventsel)
 	return offset;
 }

-static inline int amd_pmu_rdpmc_index(int index)
-{
-	int ret, first;
-
-	if (!index)
-		return index;
-
-	ret = rdpmc_indexes[index];
-
-	if (ret)
-		return ret;
-
-	if (amd_nb_event_constraint &&
-	    test_bit(index, amd_nb_event_constraint->idxmsk)) {
-		/*
-		 * according to the mnual, ECX value of the NB counters is
-		 * the index of the NB counter (0, 1, 2 or 3) plus 6
-		 */
-		first = find_first_bit(amd_nb_event_constraint->idxmsk,
-				       X86_PMC_IDX_MAX);
-		ret = index - first + 6;
-	} else
-		ret = index;
-
-	rdpmc_indexes[index] = ret;
-
-	return ret;
-}
-
 static int amd_core_hw_config(struct perf_event *event)
 {
 	if (event->attr.exclude_host && event->attr.exclude_guest)
@@ -244,34 +191,6 @@ static int amd_core_hw_config(struct perf_event *event)
 	return 0;
 }

-/*
- * NB counters do not support the following event select bits:
- *   Host/Guest only
- *   Counter mask
- *   Invert counter mask
- *   Edge detect
- *   OS/User mode
- */
-static int amd_nb_hw_config(struct perf_event *event)
-{
-	/* for NB, we only allow system wide counting mode */
-	if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK)
-		return -EINVAL;
-
-	if (event->attr.exclude_user || event->attr.exclude_kernel ||
-	    event->attr.exclude_host || event->attr.exclude_guest)
-		return -EINVAL;
-
-	event->hw.config &= ~(ARCH_PERFMON_EVENTSEL_USR |
-			      ARCH_PERFMON_EVENTSEL_OS);
-
-	if (event->hw.config & ~(AMD64_RAW_EVENT_MASK_NB |
-				 ARCH_PERFMON_EVENTSEL_INT))
-		return -EINVAL;
-
-	return 0;
-}
-
 /*
  * AMD64 events are detected based on their event codes.
  */
@@ -285,11 +204,6 @@ static inline int amd_is_nb_event(struct hw_perf_event *hwc)
 	return (hwc->config & 0xe0) == 0xe0;
 }

-static inline int amd_is_perfctr_nb_event(struct hw_perf_event *hwc)
-{
-	return amd_nb_event_constraint && amd_is_nb_event(hwc);
-}
-
 static inline int amd_has_nb(struct cpu_hw_events *cpuc)
 {
 	struct amd_nb *nb = cpuc->amd_nb;
@@ -315,9 +229,6 @@ static int amd_pmu_hw_config(struct perf_event *event)
 	if (event->attr.type == PERF_TYPE_RAW)
 		event->hw.config |= event->attr.config & AMD64_RAW_EVENT_MASK;

-	if (amd_is_perfctr_nb_event(&event->hw))
-		return amd_nb_hw_config(event);
-
 	return amd_core_hw_config(event);
 }

@@ -341,19 +252,6 @@ static void __amd_put_nb_event_constraints(struct cpu_hw_events *cpuc,
 	}
 }

-static void amd_nb_interrupt_hw_config(struct hw_perf_event *hwc)
-{
-	int core_id = cpu_data(smp_processor_id()).cpu_core_id;
-
-	/* deliver interrupts only to this core */
-	if (hwc->config & ARCH_PERFMON_EVENTSEL_INT) {
-		hwc->config |= AMD64_EVENTSEL_INT_CORE_ENABLE;
-		hwc->config &= ~AMD64_EVENTSEL_INT_CORE_SEL_MASK;
-		hwc->config |= (u64)(core_id) <<
-			AMD64_EVENTSEL_INT_CORE_SEL_SHIFT;
-	}
-}
-
 /*
  * AMD64 NorthBridge events need special treatment because
  * counter access needs to be synchronized across all cores
@@ -441,9 +339,6 @@ __amd_get_nb_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *ev
 	if (new == -1)
 		return &emptyconstraint;

-	if (amd_is_perfctr_nb_event(hwc))
-		amd_nb_interrupt_hw_config(hwc);
-
 	return &nb->event_constraints[new];
 }

@@ -543,8 +438,7 @@ amd_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
 	if (!(amd_has_nb(cpuc) && amd_is_nb_event(&event->hw)))
 		return &unconstrained;

-	return __amd_get_nb_event_constraints(cpuc, event,
-					      amd_nb_event_constraint);
+	return __amd_get_nb_event_constraints(cpuc, event, NULL);
 }

 static void amd_put_event_constraints(struct cpu_hw_events *cpuc,
@@ -643,9 +537,6 @@ static struct event_constraint amd_f15_PMC30 = EVENT_CONSTRAINT_OVERLAP(0, 0x09, 0);
 static struct event_constraint amd_f15_PMC50 = EVENT_CONSTRAINT(0, 0x3F, 0);
 static struct event_constraint amd_f15_PMC53 = EVENT_CONSTRAINT(0, 0x38, 0);

-static struct event_constraint amd_NBPMC96 = EVENT_CONSTRAINT(0, 0x3C0, 0);
-static struct event_constraint amd_NBPMC74 = EVENT_CONSTRAINT(0, 0xF0, 0);
-
 static struct event_constraint *
 amd_get_event_constraints_f15h(struct cpu_hw_events *cpuc, struct perf_event *event)
 {
@@ -711,8 +602,8 @@ amd_get_event_constraints_f15h(struct cpu_hw_events *cpuc, struct perf_event *ev
 		return &amd_f15_PMC20;
 	}
 	case AMD_EVENT_NB:
-		return __amd_get_nb_event_constraints(cpuc, event,
-						      amd_nb_event_constraint);
+		/* moved to perf_event_amd_uncore.c */
+		return &emptyconstraint;
 	default:
 		return &emptyconstraint;
 	}
@@ -738,7 +629,6 @@ static __initconst const struct x86_pmu amd_pmu = {
 	.eventsel		= MSR_K7_EVNTSEL0,
 	.perfctr		= MSR_K7_PERFCTR0,
 	.addr_offset		= amd_pmu_addr_offset,
-	.rdpmc_index		= amd_pmu_rdpmc_index,
 	.event_map		= amd_pmu_event_map,
 	.max_events		= ARRAY_SIZE(amd_perfmon_event_map),
 	.num_counters		= AMD64_NUM_COUNTERS,
@@ -790,23 +680,6 @@ static int setup_perfctr_core(void)
 	return 0;
 }

-static int setup_perfctr_nb(void)
-{
-	if (!cpu_has_perfctr_nb)
-		return -ENODEV;
-
-	x86_pmu.num_counters += AMD64_NUM_COUNTERS_NB;
-
-	if (cpu_has_perfctr_core)
-		amd_nb_event_constraint = &amd_NBPMC96;
-	else
-		amd_nb_event_constraint = &amd_NBPMC74;
-
-	printk(KERN_INFO "perf: AMD northbridge performance counters detected\n");
-
-	return 0;
-}
-
 __init int amd_pmu_init(void)
 {
 	/* Performance-monitoring supported from K7 and later: */
@@ -817,7 +690,6 @@ __init int amd_pmu_init(void)

 	setup_event_constraints();
 	setup_perfctr_core();
-	setup_perfctr_nb();

 	/* Events are common for all AMDs */
 	memcpy(hw_cache_event_ids, amd_hw_cache_event_ids,
......
[one file's diff is collapsed in this view]
@@ -81,6 +81,7 @@ static struct event_constraint intel_nehalem_event_constraints[] __read_mostly =
 static struct extra_reg intel_nehalem_extra_regs[] __read_mostly =
 {
 	INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0xffff, RSP_0),
+	INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x100b),
 	EVENT_EXTRA_END
 };

@@ -108,6 +109,8 @@ static struct event_constraint intel_snb_event_constraints[] __read_mostly =
 	INTEL_EVENT_CONSTRAINT(0x48, 0x4), /* L1D_PEND_MISS.PENDING */
 	INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */
 	INTEL_EVENT_CONSTRAINT(0xcd, 0x8), /* MEM_TRANS_RETIRED.LOAD_LATENCY */
+	INTEL_UEVENT_CONSTRAINT(0x04a3, 0xf), /* CYCLE_ACTIVITY.CYCLES_NO_DISPATCH */
+	INTEL_UEVENT_CONSTRAINT(0x02a3, 0x4), /* CYCLE_ACTIVITY.CYCLES_L1D_PENDING */
 	EVENT_CONSTRAINT_END
 };

@@ -136,6 +139,7 @@ static struct extra_reg intel_westmere_extra_regs[] __read_mostly =
 {
 	INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0xffff, RSP_0),
 	INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0xffff, RSP_1),
+	INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x100b),
 	EVENT_EXTRA_END
 };

@@ -155,6 +159,8 @@ static struct event_constraint intel_gen_event_constraints[] __read_mostly =
 static struct extra_reg intel_snb_extra_regs[] __read_mostly = {
 	INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0x3f807f8fffull, RSP_0),
 	INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0x3f807f8fffull, RSP_1),
+	INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd),
+	INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd),
 	EVENT_EXTRA_END
 };

@@ -164,6 +170,21 @@ static struct extra_reg intel_snbep_extra_regs[] __read_mostly = {
 	EVENT_EXTRA_END
 };

+EVENT_ATTR_STR(mem-loads,  mem_ld_nhm, "event=0x0b,umask=0x10,ldlat=3");
+EVENT_ATTR_STR(mem-loads,  mem_ld_snb, "event=0xcd,umask=0x1,ldlat=3");
+EVENT_ATTR_STR(mem-stores, mem_st_snb, "event=0xcd,umask=0x2");
+
+struct attribute *nhm_events_attrs[] = {
+	EVENT_PTR(mem_ld_nhm),
+	NULL,
+};
+
+struct attribute *snb_events_attrs[] = {
+	EVENT_PTR(mem_ld_snb),
+	EVENT_PTR(mem_st_snb),
+	NULL,
+};
+
 static u64 intel_pmu_event_map(int hw_event)
 {
 	return intel_perfmon_event_map[hw_event];
@@ -1398,10 +1419,13 @@ x86_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)

 	if (x86_pmu.event_constraints) {
 		for_each_event_constraint(c, x86_pmu.event_constraints) {
-			if ((event->hw.config & c->cmask) == c->code)
+			if ((event->hw.config & c->cmask) == c->code) {
+				/* hw.flags zeroed at initialization */
+				event->hw.flags |= c->flags;
 				return c;
+			}
 		}
 	}

 	return &unconstrained;
 }
@@ -1444,6 +1468,7 @@ intel_put_shared_regs_event_constraints(struct cpu_hw_events *cpuc,
 static void intel_put_event_constraints(struct cpu_hw_events *cpuc,
 					struct perf_event *event)
 {
+	event->hw.flags = 0;
 	intel_put_shared_regs_event_constraints(cpuc, event);
 }

@@ -1767,6 +1792,8 @@ static void intel_pmu_flush_branch_stack(void)

 PMU_FORMAT_ATTR(offcore_rsp, "config1:0-63");

+PMU_FORMAT_ATTR(ldlat, "config1:0-15");
+
 static struct attribute *intel_arch3_formats_attr[] = {
 	&format_attr_event.attr,
 	&format_attr_umask.attr,
@@ -1777,6 +1804,7 @@ static struct attribute *intel_arch3_formats_attr[] = {
 	&format_attr_cmask.attr,

 	&format_attr_offcore_rsp.attr, /* XXX do NHM/WSM + SNB breakout */
+	&format_attr_ldlat.attr, /* PEBS load latency */
 	NULL,
 };

@@ -2037,6 +2065,8 @@ __init int intel_pmu_init(void)
 		x86_pmu.enable_all = intel_pmu_nhm_enable_all;
 		x86_pmu.extra_regs = intel_nehalem_extra_regs;

+		x86_pmu.cpu_events = nhm_events_attrs;
+
 		/* UOPS_ISSUED.STALLED_CYCLES */
 		intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] =
 			X86_CONFIG(.event=0x0e, .umask=0x01, .inv=1, .cmask=1);
@@ -2080,6 +2110,8 @@ __init int intel_pmu_init(void)
 		x86_pmu.extra_regs = intel_westmere_extra_regs;
 		x86_pmu.er_flags |= ERF_HAS_RSP_1;

+		x86_pmu.cpu_events = nhm_events_attrs;
+
 		/* UOPS_ISSUED.STALLED_CYCLES */
 		intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] =
 			X86_CONFIG(.event=0x0e, .umask=0x01, .inv=1, .cmask=1);
@@ -2111,6 +2143,8 @@ __init int intel_pmu_init(void)
 		x86_pmu.er_flags |= ERF_HAS_RSP_1;
 		x86_pmu.er_flags |= ERF_NO_HT_SHARING;

+		x86_pmu.cpu_events = snb_events_attrs;
+
 		/* UOPS_ISSUED.ANY,c=1,i=1 to count stall cycles */
 		intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] =
 			X86_CONFIG(.event=0x0e, .umask=0x01, .inv=1, .cmask=1);
@@ -2140,6 +2174,8 @@ __init int intel_pmu_init(void)
 	x86_pmu.er_flags |= ERF_HAS_RSP_1;
 	x86_pmu.er_flags |= ERF_NO_HT_SHARING;

+	x86_pmu.cpu_events = snb_events_attrs;
+
 	/* UOPS_ISSUED.ANY,c=1,i=1 to count stall cycles */
 	intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] =
 		X86_CONFIG(.event=0x0e, .umask=0x01, .inv=1, .cmask=1);
......
@@ -24,6 +24,130 @@ struct pebs_record_32 {

  */

+union intel_x86_pebs_dse {
+	u64 val;
+	struct {
+		unsigned int ld_dse:4;
+		unsigned int ld_stlb_miss:1;
+		unsigned int ld_locked:1;
+		unsigned int ld_reserved:26;
+	};
+	struct {
+		unsigned int st_l1d_hit:1;
+		unsigned int st_reserved1:3;
+		unsigned int st_stlb_miss:1;
+		unsigned int st_locked:1;
+		unsigned int st_reserved2:26;
+	};
+};
+
+/*
+ * Map PEBS Load Latency Data Source encodings to generic
+ * memory data source information
+ */
+#define P(a, b) PERF_MEM_S(a, b)
+#define OP_LH (P(OP, LOAD) | P(LVL, HIT))
+#define SNOOP_NONE_MISS (P(SNOOP, NONE) | P(SNOOP, MISS))
+
+static const u64 pebs_data_source[] = {
+	P(OP, LOAD) | P(LVL, MISS) | P(LVL, L3) | P(SNOOP, NA),/* 0x00:ukn L3 */
+	OP_LH | P(LVL, L1)  | P(SNOOP, NONE),	/* 0x01: L1 local */
+	OP_LH | P(LVL, LFB) | P(SNOOP, NONE),	/* 0x02: LFB hit */
+	OP_LH | P(LVL, L2)  | P(SNOOP, NONE),	/* 0x03: L2 hit */
+	OP_LH | P(LVL, L3)  | P(SNOOP, NONE),	/* 0x04: L3 hit */
+	OP_LH | P(LVL, L3)  | P(SNOOP, MISS),	/* 0x05: L3 hit, snoop miss */
+	OP_LH | P(LVL, L3)  | P(SNOOP, HIT),	/* 0x06: L3 hit, snoop hit */
+	OP_LH | P(LVL, L3)  | P(SNOOP, HITM),	/* 0x07: L3 hit, snoop hitm */
+	OP_LH | P(LVL, REM_CCE1) | P(SNOOP, HIT),  /* 0x08: L3 miss snoop hit */
+	OP_LH | P(LVL, REM_CCE1) | P(SNOOP, HITM), /* 0x09: L3 miss snoop hitm*/
+	OP_LH | P(LVL, LOC_RAM)  | P(SNOOP, HIT),  /* 0x0a: L3 miss, shared */
+	OP_LH | P(LVL, REM_RAM1) | P(SNOOP, HIT),  /* 0x0b: L3 miss, shared */
+	OP_LH | P(LVL, LOC_RAM)  | SNOOP_NONE_MISS,/* 0x0c: L3 miss, excl */
+	OP_LH | P(LVL, REM_RAM1) | SNOOP_NONE_MISS,/* 0x0d: L3 miss, excl */
+	OP_LH | P(LVL, IO)  | P(SNOOP, NONE),	/* 0x0e: I/O */
+	OP_LH | P(LVL, UNC) | P(SNOOP, NONE),	/* 0x0f: uncached */
+};
+
+static u64 precise_store_data(u64 status)
+{
+	union intel_x86_pebs_dse dse;
+	u64 val = P(OP, STORE) | P(SNOOP, NA) | P(LVL, L1) | P(TLB, L2);
+
+	dse.val = status;
+
+	/*
+	 * bit 4: TLB access
+	 * 1 = stored missed 2nd level TLB
+	 *
+	 * so it either hit the walker or the OS
+	 * otherwise hit 2nd level TLB
+	 */
+	if (dse.st_stlb_miss)
+		val |= P(TLB, MISS);
+	else
+		val |= P(TLB, HIT);
+
+	/*
+	 * bit 0: hit L1 data cache
+	 * if not set, then all we know is that
+	 * it missed L1D
+	 */
+	if (dse.st_l1d_hit)
+		val |= P(LVL, HIT);
+	else
+		val |= P(LVL, MISS);
+
+	/*
+	 * bit 5: Locked prefix
+	 */
+	if (dse.st_locked)
+		val |= P(LOCK, LOCKED);
+
+	return val;
+}
+
+static u64 load_latency_data(u64 status)
+{
+	union intel_x86_pebs_dse dse;
+	u64 val;
+	int model = boot_cpu_data.x86_model;
+	int fam = boot_cpu_data.x86;
+
+	dse.val = status;
+
+	/*
+	 * use the mapping table for bit 0-3
+	 */
+	val = pebs_data_source[dse.ld_dse];
+
+	/*
+	 * Nehalem models do not support TLB, Lock infos
+	 */
+	if (fam == 0x6 && (model == 26 || model == 30
+	    || model == 31 || model == 46)) {
+		val |= P(TLB, NA) | P(LOCK, NA);
+		return val;
+	}
+	/*
+	 * bit 4: TLB access
+	 * 0 = did not miss 2nd level TLB
+	 * 1 = missed 2nd level TLB
+	 */
+	if (dse.ld_stlb_miss)
+		val |= P(TLB, MISS) | P(TLB, L2);
+	else
+		val |= P(TLB, HIT) | P(TLB, L1) | P(TLB, L2);
+
+	/*
+	 * bit 5: locked prefix
+	 */
+	if (dse.ld_locked)
+		val |= P(LOCK, LOCKED);
+
+	return val;
+}
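A worked example of the two decoders added above, using hypothetical PEBS status values (not taken from the diff; a non-Nehalem model is assumed so the TLB/lock bits are honored):

    /* Load status 0x13: dse = 0x3 (L2 hit), bit 4 set (STLB miss). */
    u64 ld = load_latency_data(0x13);
    /* ld == OP_LH | P(LVL, L2) | P(SNOOP, NONE)
     *       | P(TLB, MISS) | P(TLB, L2) */

    /* Store status 0x11: bit 0 set (L1D hit), bit 4 set (STLB miss). */
    u64 st = precise_store_data(0x11);
    /* st == P(OP, STORE) | P(SNOOP, NA) | P(LVL, L1) | P(TLB, L2)
     *       | P(TLB, MISS) | P(LVL, HIT) */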
struct pebs_record_core { struct pebs_record_core {
u64 flags, ip; u64 flags, ip;
u64 ax, bx, cx, dx; u64 ax, bx, cx, dx;
@@ -365,7 +489,7 @@ struct event_constraint intel_atom_pebs_event_constraints[] = {
};

struct event_constraint intel_nehalem_pebs_event_constraints[] = {
	INTEL_PLD_CONSTRAINT(0x100b, 0xf),    /* MEM_INST_RETIRED.* */
	INTEL_EVENT_CONSTRAINT(0x0f, 0xf),    /* MEM_UNCORE_RETIRED.* */
	INTEL_UEVENT_CONSTRAINT(0x010c, 0xf), /* MEM_STORE_RETIRED.DTLB_MISS */
	INTEL_EVENT_CONSTRAINT(0xc0, 0xf),    /* INST_RETIRED.ANY */
@@ -380,7 +504,7 @@ struct event_constraint intel_nehalem_pebs_event_constraints[] = {
};

struct event_constraint intel_westmere_pebs_event_constraints[] = {
	INTEL_PLD_CONSTRAINT(0x100b, 0xf),    /* MEM_INST_RETIRED.* */
	INTEL_EVENT_CONSTRAINT(0x0f, 0xf),    /* MEM_UNCORE_RETIRED.* */
	INTEL_UEVENT_CONSTRAINT(0x010c, 0xf), /* MEM_STORE_RETIRED.DTLB_MISS */
	INTEL_EVENT_CONSTRAINT(0xc0, 0xf),    /* INSTR_RETIRED.* */
@@ -400,7 +524,8 @@ struct event_constraint intel_snb_pebs_event_constraints[] = {
	INTEL_UEVENT_CONSTRAINT(0x02c2, 0xf), /* UOPS_RETIRED.RETIRE_SLOTS */
	INTEL_EVENT_CONSTRAINT(0xc4, 0xf),    /* BR_INST_RETIRED.* */
	INTEL_EVENT_CONSTRAINT(0xc5, 0xf),    /* BR_MISP_RETIRED.* */
	INTEL_PLD_CONSTRAINT(0x01cd, 0x8),    /* MEM_TRANS_RETIRED.LAT_ABOVE_THR */
	INTEL_PST_CONSTRAINT(0x02cd, 0x8),    /* MEM_TRANS_RETIRED.PRECISE_STORES */
	INTEL_EVENT_CONSTRAINT(0xd0, 0xf),    /* MEM_UOP_RETIRED.* */
	INTEL_EVENT_CONSTRAINT(0xd1, 0xf),    /* MEM_LOAD_UOPS_RETIRED.* */
	INTEL_EVENT_CONSTRAINT(0xd2, 0xf),    /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
@@ -414,7 +539,8 @@ struct event_constraint intel_ivb_pebs_event_constraints[] = {
	INTEL_UEVENT_CONSTRAINT(0x02c2, 0xf), /* UOPS_RETIRED.RETIRE_SLOTS */
	INTEL_EVENT_CONSTRAINT(0xc4, 0xf),    /* BR_INST_RETIRED.* */
	INTEL_EVENT_CONSTRAINT(0xc5, 0xf),    /* BR_MISP_RETIRED.* */
	INTEL_PLD_CONSTRAINT(0x01cd, 0x8),    /* MEM_TRANS_RETIRED.LAT_ABOVE_THR */
	INTEL_PST_CONSTRAINT(0x02cd, 0x8),    /* MEM_TRANS_RETIRED.PRECISE_STORES */
	INTEL_EVENT_CONSTRAINT(0xd0, 0xf),    /* MEM_UOP_RETIRED.* */
	INTEL_EVENT_CONSTRAINT(0xd1, 0xf),    /* MEM_LOAD_UOPS_RETIRED.* */
	INTEL_EVENT_CONSTRAINT(0xd2, 0xf),    /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
@@ -431,10 +557,12 @@ struct event_constraint *intel_pebs_constraints(struct perf_event *event)
	if (x86_pmu.pebs_constraints) {
		for_each_event_constraint(c, x86_pmu.pebs_constraints) {
			if ((event->hw.config & c->cmask) == c->code) {
				event->hw.flags |= c->flags;
				return c;
			}
		}
	}

	return &emptyconstraint;
}
@@ -447,6 +575,11 @@ void intel_pmu_pebs_enable(struct perf_event *event)
	hwc->config &= ~ARCH_PERFMON_EVENTSEL_INT;

	cpuc->pebs_enabled |= 1ULL << hwc->idx;

	if (event->hw.flags & PERF_X86_EVENT_PEBS_LDLAT)
		cpuc->pebs_enabled |= 1ULL << (hwc->idx + 32);
	else if (event->hw.flags & PERF_X86_EVENT_PEBS_ST)
		cpuc->pebs_enabled |= 1ULL << 63;
}
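The extra bits mirror the layout this code programs into MSR_PEBS_ENABLE: bit i arms PEBS on counter i, bit 32+i additionally enables load-latency filtering for that counter, and bit 63 selects precise-store mode. A small sketch of the mask a load-latency event on a hypothetical counter 1 would produce:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int idx = 1;				/* hypothetical counter index */
	uint64_t pebs_enabled = 0;

	pebs_enabled |= 1ULL << idx;		/* PEBS on counter 1 */
	pebs_enabled |= 1ULL << (idx + 32);	/* load-latency on counter 1 */

	printf("0x%016llx\n", (unsigned long long)pebs_enabled);
	/* prints 0x0000000200000002 */
	return 0;
}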
void intel_pmu_pebs_disable(struct perf_event *event)

@@ -559,20 +692,51 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
				   struct pt_regs *iregs, void *__pebs)
{
	/*
	 * We cast to pebs_record_nhm to get the load latency data
	 * if extra_reg MSR_PEBS_LD_LAT_THRESHOLD used
	 */
	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
	struct pebs_record_nhm *pebs = __pebs;
	struct perf_sample_data data;
	struct pt_regs regs;
	u64 sample_type;
	int fll, fst;

	if (!intel_pmu_save_and_restart(event))
		return;

	fll = event->hw.flags & PERF_X86_EVENT_PEBS_LDLAT;
	fst = event->hw.flags & PERF_X86_EVENT_PEBS_ST;

	perf_sample_data_init(&data, 0, event->hw.last_period);
	data.period = event->hw.last_period;

	sample_type = event->attr.sample_type;

	/*
	 * if PEBS-LL or PreciseStore
	 */
	if (fll || fst) {
		if (sample_type & PERF_SAMPLE_ADDR)
			data.addr = pebs->dla;

		/*
		 * Use latency for weight (only avail with PEBS-LL)
		 */
		if (fll && (sample_type & PERF_SAMPLE_WEIGHT))
			data.weight = pebs->lat;

		/*
		 * data.data_src encodes the data source
		 */
		if (sample_type & PERF_SAMPLE_DATA_SRC) {
			if (fll)
				data.data_src.val = load_latency_data(pebs->dse);
			else
				data.data_src.val = precise_store_data(pebs->dse);
		}
	}

	/*
	 * We use the interrupt regs as a base because the PEBS record
	 * does not contain a full regs set, specifically it seems to
...
@@ -76,7 +76,7 @@
#define SNBEP_PMON_CTL_UMASK_MASK	0x0000ff00
#define SNBEP_PMON_CTL_RST		(1 << 17)
#define SNBEP_PMON_CTL_EDGE_DET		(1 << 18)
#define SNBEP_PMON_CTL_EV_SEL_EXT	(1 << 21)
#define SNBEP_PMON_CTL_EN		(1 << 22)
#define SNBEP_PMON_CTL_INVERT		(1 << 23)
#define SNBEP_PMON_CTL_TRESH_MASK	0xff000000
@@ -148,9 +148,20 @@
#define SNBEP_C0_MSR_PMON_CTL0			0xd10
#define SNBEP_C0_MSR_PMON_BOX_CTL		0xd04
#define SNBEP_C0_MSR_PMON_BOX_FILTER		0xd14
#define SNBEP_CB0_MSR_PMON_BOX_FILTER_MASK	0xfffffc1f
#define SNBEP_CBO_MSR_OFFSET			0x20

#define SNBEP_CB0_MSR_PMON_BOX_FILTER_TID	0x1f
#define SNBEP_CB0_MSR_PMON_BOX_FILTER_NID	0x3fc00
#define SNBEP_CB0_MSR_PMON_BOX_FILTER_STATE	0x7c0000
#define SNBEP_CB0_MSR_PMON_BOX_FILTER_OPC	0xff800000

#define SNBEP_CBO_EVENT_EXTRA_REG(e, m, i) {	\
	.event = (e),				\
	.msr = SNBEP_C0_MSR_PMON_BOX_FILTER,	\
	.config_mask = (m),			\
	.idx = (i)				\
}
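The initializer macro yields a struct extra_reg entry that ties an event encoding to the Cbox filter MSR. A hedged, stand-alone sketch of such a table entry; the struct here is a pared-down stand-in for the kernel's, and the event/mask/idx values are illustrative only, not taken from the actual SNB-EP tables:

#include <stdint.h>

/* minimal stand-in for the kernel's struct extra_reg */
struct extra_reg {
	unsigned int event;
	unsigned int msr;
	uint64_t config_mask;
	int idx;
};

#define SNBEP_C0_MSR_PMON_BOX_FILTER 0xd14
#define SNBEP_CBO_EVENT_EXTRA_REG(e, m, i) {	\
	.event = (e),				\
	.msr = SNBEP_C0_MSR_PMON_BOX_FILTER,	\
	.config_mask = (m),			\
	.idx = (i)				\
}

/* hypothetical entry: event 0x0334, match the whole event+umask byte pair */
static const struct extra_reg cbox_extra_regs[] = {
	SNBEP_CBO_EVENT_EXTRA_REG(0x0334, 0xffff, 0x4),
};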
/* SNB-EP PCU register */
#define SNBEP_PCU_MSR_PMON_CTR0		0xc36
#define SNBEP_PCU_MSR_PMON_CTL0		0xc30

@@ -160,6 +171,55 @@
#define SNBEP_PCU_MSR_CORE_C3_CTR	0x3fc
#define SNBEP_PCU_MSR_CORE_C6_CTR	0x3fd
/* IVT event control */
#define IVT_PMON_BOX_CTL_INT (SNBEP_PMON_BOX_CTL_RST_CTRL | \
SNBEP_PMON_BOX_CTL_RST_CTRS)
#define IVT_PMON_RAW_EVENT_MASK (SNBEP_PMON_CTL_EV_SEL_MASK | \
SNBEP_PMON_CTL_UMASK_MASK | \
SNBEP_PMON_CTL_EDGE_DET | \
SNBEP_PMON_CTL_TRESH_MASK)
/* IVT Ubox */
#define IVT_U_MSR_PMON_GLOBAL_CTL 0xc00
#define IVT_U_PMON_GLOBAL_FRZ_ALL (1 << 31)
#define IVT_U_PMON_GLOBAL_UNFRZ_ALL (1 << 29)
#define IVT_U_MSR_PMON_RAW_EVENT_MASK \
(SNBEP_PMON_CTL_EV_SEL_MASK | \
SNBEP_PMON_CTL_UMASK_MASK | \
SNBEP_PMON_CTL_EDGE_DET | \
SNBEP_U_MSR_PMON_CTL_TRESH_MASK)
/* IVT Cbo */
#define IVT_CBO_MSR_PMON_RAW_EVENT_MASK (IVT_PMON_RAW_EVENT_MASK | \
SNBEP_CBO_PMON_CTL_TID_EN)
#define IVT_CB0_MSR_PMON_BOX_FILTER_TID (0x1fULL << 0)
#define IVT_CB0_MSR_PMON_BOX_FILTER_LINK (0xfULL << 5)
#define IVT_CB0_MSR_PMON_BOX_FILTER_STATE (0x3fULL << 17)
#define IVT_CB0_MSR_PMON_BOX_FILTER_NID (0xffffULL << 32)
#define IVT_CB0_MSR_PMON_BOX_FILTER_OPC (0x1ffULL << 52)
#define IVT_CB0_MSR_PMON_BOX_FILTER_C6 (0x1ULL << 61)
#define IVT_CB0_MSR_PMON_BOX_FILTER_NC (0x1ULL << 62)
#define IVT_CB0_MSR_PMON_BOX_FILTER_IOSC (0x1ULL << 63)
/* IVT home agent */
#define IVT_HA_PCI_PMON_CTL_Q_OCC_RST (1 << 16)
#define IVT_HA_PCI_PMON_RAW_EVENT_MASK \
(IVT_PMON_RAW_EVENT_MASK | \
IVT_HA_PCI_PMON_CTL_Q_OCC_RST)
/* IVT PCU */
#define IVT_PCU_MSR_PMON_RAW_EVENT_MASK \
(SNBEP_PMON_CTL_EV_SEL_MASK | \
SNBEP_PMON_CTL_EV_SEL_EXT | \
SNBEP_PCU_MSR_PMON_CTL_OCC_SEL_MASK | \
SNBEP_PMON_CTL_EDGE_DET | \
SNBEP_PCU_MSR_PMON_CTL_TRESH_MASK | \
SNBEP_PCU_MSR_PMON_CTL_OCC_INVERT | \
SNBEP_PCU_MSR_PMON_CTL_OCC_EDGE_DET)
/* IVT QPI */
#define IVT_QPI_PCI_PMON_RAW_EVENT_MASK \
(IVT_PMON_RAW_EVENT_MASK | \
SNBEP_PMON_CTL_EV_SEL_EXT)
/* NHM-EX event control */
#define NHMEX_PMON_CTL_EV_SEL_MASK	0x000000ff
#define NHMEX_PMON_CTL_UMASK_MASK	0x0000ff00
...
@@ -895,8 +895,8 @@ static void p4_pmu_disable_pebs(void)
 * So at moment let leave metrics turned on forever -- it's
 * ok for now but need to be revisited!
 *
 * (void)wrmsrl_safe(MSR_IA32_PEBS_ENABLE, 0);
 * (void)wrmsrl_safe(MSR_P4_PEBS_MATRIX_VERT, 0);
 */
}

@@ -910,8 +910,7 @@ static inline void p4_pmu_disable_event(struct perf_event *event)
	 * asserted again and again
	 */
	(void)wrmsrl_safe(hwc->config_base,
		p4_config_unpack_cccr(hwc->config) & ~P4_CCCR_ENABLE & ~P4_CCCR_OVF & ~P4_CCCR_RESERVED);
}

static void p4_pmu_disable_all(void)

@@ -957,7 +956,7 @@ static void p4_pmu_enable_event(struct perf_event *event)
	u64 escr_addr, cccr;

	bind = &p4_event_bind_map[idx];
	escr_addr = bind->escr_msr[thread];

	/*
	 * - we dont support cascaded counters yet
...
@@ -353,7 +353,11 @@ int __kprobes __copy_instruction(u8 *dest, u8 *src)
		 * have given.
		 */
		newdisp = (u8 *) src + (s64) insn.displacement.value - (u8 *) dest;
		if ((s64) (s32) newdisp != newdisp) {
			pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp);
			pr_err("\tSrc: %p, Dest: %p, old disp: %x\n",
			       src, dest, insn.displacement.value);
			return 0;
		}
		disp = (u8 *) dest + insn_offset_displacement(&insn);
		*(s32 *) disp = (s32) newdisp;
	}
...
@@ -697,3 +697,32 @@ bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
	send_sig(SIGTRAP, current, 0);
	return ret;
}

unsigned long
arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr, struct pt_regs *regs)
{
	int rasize, ncopied;
	unsigned long orig_ret_vaddr = 0; /* clear high bits for 32-bit apps */

	rasize = is_ia32_task() ? 4 : 8;
	ncopied = copy_from_user(&orig_ret_vaddr, (void __user *)regs->sp, rasize);
	if (unlikely(ncopied))
		return -1;

	/* check whether address has been already hijacked */
	if (orig_ret_vaddr == trampoline_vaddr)
		return orig_ret_vaddr;

	ncopied = copy_to_user((void __user *)regs->sp, &trampoline_vaddr, rasize);
	if (likely(!ncopied))
		return orig_ret_vaddr;

	if (ncopied != rasize) {
		pr_err("uprobe: return address clobbered: pid=%d, %%sp=%#lx, "
			"%%ip=%#lx\n", current->pid, regs->sp, regs->ip);
		force_sig_info(SIGSEGV, SEND_SIG_FORCED, current);
	}

	return -1;
}
@@ -351,4 +351,10 @@ void ftrace_likely_update(struct ftrace_branch_data *f, int val, int expect);
 */
#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
/* Ignore/forbid kprobes attach on very low level functions marked by this attribute: */
#ifdef CONFIG_KPROBES
# define __kprobes __attribute__((__section__(".kprobes.text")))
#else
# define __kprobes
#endif
#endif /* __LINUX_COMPILER_H */
@@ -29,6 +29,7 @@
 *		<jkenisto@us.ibm.com> and Prasanna S Panchamukhi
 *		<prasanna@in.ibm.com> added function-return probes.
 */
#include <linux/compiler.h>	/* for __kprobes */
#include <linux/linkage.h>
#include <linux/list.h>
#include <linux/notifier.h>

@@ -49,16 +50,11 @@
#define KPROBE_REENTER		0x00000004
#define KPROBE_HIT_SSDONE	0x00000008

#else /* CONFIG_KPROBES */
typedef int kprobe_opcode_t;
struct arch_specific_insn {
	int dummy;
};
#endif /* CONFIG_KPROBES */

struct kprobe;
...
@@ -21,7 +21,6 @@
 */

#ifdef CONFIG_PERF_EVENTS
# include <asm/perf_event.h>
# include <asm/local64.h>
#endif

@@ -128,6 +127,7 @@ struct hw_perf_event {
	int				event_base_rdpmc;
	int				idx;
	int				last_cpu;
	int				flags;

	struct hw_perf_event_extra	extra_reg;
	struct hw_perf_event_extra	branch_reg;

@@ -299,22 +299,7 @@ struct swevent_hlist {
#define PERF_ATTACH_GROUP	0x02
#define PERF_ATTACH_TASK	0x04

struct perf_cgroup;
struct ring_buffer;

/**

@@ -583,11 +568,13 @@ struct perf_sample_data {
		u32	reserved;
	}				cpu_entry;
	u64				period;
	union  perf_mem_data_src	data_src;
	struct perf_callchain_entry	*callchain;
	struct perf_raw_record		*raw;
	struct perf_branch_stack	*br_stack;
	struct perf_regs_user		regs_user;
	u64				stack_user_size;
	u64				weight;
};

static inline void perf_sample_data_init(struct perf_sample_data *data,

@@ -601,6 +588,8 @@ static inline void perf_sample_data_init(struct perf_sample_data *data,
	data->regs_user.abi = PERF_SAMPLE_REGS_ABI_NONE;
	data->regs_user.regs = NULL;
	data->stack_user_size = 0;
	data->weight = 0;
	data->data_src.val = 0;
}

extern void perf_output_sample(struct perf_output_handle *handle,

@@ -831,6 +820,7 @@ do {									\
struct perf_pmu_events_attr {
	struct device_attribute attr;
	u64 id;
	const char *event_str;
};

#define PMU_EVENT_ATTR(_name, _var, _id, _show)				\
...
@@ -38,6 +38,8 @@ struct inode;
#define UPROBE_HANDLER_REMOVE		1
#define UPROBE_HANDLER_MASK		1

#define MAX_URETPROBE_DEPTH		64

enum uprobe_filter_ctx {
	UPROBE_FILTER_REGISTER,
	UPROBE_FILTER_UNREGISTER,

@@ -46,6 +48,9 @@ enum uprobe_filter_ctx {
struct uprobe_consumer {
	int (*handler)(struct uprobe_consumer *self, struct pt_regs *regs);
	int (*ret_handler)(struct uprobe_consumer *self,
				unsigned long func,
				struct pt_regs *regs);
	bool (*filter)(struct uprobe_consumer *self,
				enum uprobe_filter_ctx ctx,
				struct mm_struct *mm);
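With ret_handler in place, one consumer can observe both entry and return of a probed user function; a consumer that sets only ret_handler behaves as a uretprobe. A hedged sketch of how a kernel-side user might fill one in, using the uprobe_register() prototype visible below (the handler bodies are hypothetical and the inode/offset lookup is elided):

static int my_entry_handler(struct uprobe_consumer *self, struct pt_regs *regs)
{
	pr_info("uprobe hit, ip=%lx\n", instruction_pointer(regs));
	return 0;
}

/* called when the probed function returns; 'func' is its entry address */
static int my_ret_handler(struct uprobe_consumer *self,
			  unsigned long func, struct pt_regs *regs)
{
	pr_info("return from %lx\n", func);
	return 0;
}

static struct uprobe_consumer my_consumer = {
	.handler	= my_entry_handler,
	.ret_handler	= my_ret_handler,
};

/* somewhere in module init, with inode/offset already resolved: */
/* err = uprobe_register(inode, offset, &my_consumer); */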
@@ -68,6 +73,8 @@ struct uprobe_task {
	enum uprobe_task_state		state;
	struct arch_uprobe_task		autask;

	struct return_instance		*return_instances;
	unsigned int			depth;
	struct uprobe			*active_uprobe;

	unsigned long			xol_vaddr;

@@ -100,6 +107,7 @@ struct uprobes_state {
extern int __weak set_swbp(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
extern int __weak set_orig_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
extern bool __weak is_swbp_insn(uprobe_opcode_t *insn);
extern bool __weak is_trap_insn(uprobe_opcode_t *insn);
extern int uprobe_register(struct inode *inode, loff_t offset, struct uprobe_consumer *uc);
extern int uprobe_apply(struct inode *inode, loff_t offset, struct uprobe_consumer *uc, bool);
extern void uprobe_unregister(struct inode *inode, loff_t offset, struct uprobe_consumer *uc);
...
@@ -132,8 +132,10 @@ enum perf_event_sample_format {
	PERF_SAMPLE_BRANCH_STACK		= 1U << 11,
	PERF_SAMPLE_REGS_USER			= 1U << 12,
	PERF_SAMPLE_STACK_USER			= 1U << 13,
	PERF_SAMPLE_WEIGHT			= 1U << 14,
	PERF_SAMPLE_DATA_SRC			= 1U << 15,

	PERF_SAMPLE_MAX = 1U << 16,		/* non-ABI */
};
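A profiler opts into the two new fields by setting their bits in perf_event_attr.sample_type before perf_event_open(). A minimal sketch (the raw event code is left out; in practice a PEBS-capable memory event with nonzero precise_ip is what actually populates weight and data_src):

#include <linux/perf_event.h>
#include <string.h>

static void setup_attr(struct perf_event_attr *attr)
{
	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	attr->type = PERF_TYPE_RAW;	/* a PEBS memory event would go here */
	attr->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_ADDR |
			    PERF_SAMPLE_WEIGHT | PERF_SAMPLE_DATA_SRC;
	attr->sample_period = 1000;
	attr->precise_ip = 2;		/* PEBS required for weight/data_src */
}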
/*

@@ -443,6 +445,7 @@ struct perf_event_mmap_page {
#define PERF_RECORD_MISC_GUEST_KERNEL		(4 << 0)
#define PERF_RECORD_MISC_GUEST_USER		(5 << 0)

#define PERF_RECORD_MISC_MMAP_DATA		(1 << 13)
/*
 * Indicates that the content of PERF_SAMPLE_IP points to
 * the actual instruction that triggered the event. See also

@@ -588,6 +591,9 @@ enum perf_event_type {
	 *	{ u64			size;
	 *	  char			data[size];
	 *	  u64			dyn_size; } && PERF_SAMPLE_STACK_USER
	 *
	 *	{ u64			weight;   } && PERF_SAMPLE_WEIGHT
	 *	{ u64			data_src; } && PERF_SAMPLE_DATA_SRC
	 * };
	 */
	PERF_RECORD_SAMPLE			= 9,

@@ -613,4 +619,67 @@ enum perf_callchain_context {
#define PERF_FLAG_FD_OUTPUT		(1U << 1)
#define PERF_FLAG_PID_CGROUP		(1U << 2) /* pid=cgroup id, per-cpu mode only */
union perf_mem_data_src {
	__u64 val;
	struct {
		__u64   mem_op:5,	/* type of opcode */
			mem_lvl:14,	/* memory hierarchy level */
			mem_snoop:5,	/* snoop mode */
			mem_lock:2,	/* lock instr */
			mem_dtlb:7,	/* tlb access */
			mem_rsvd:31;
	};
};
/* type of opcode (load/store/prefetch,code) */
#define PERF_MEM_OP_NA 0x01 /* not available */
#define PERF_MEM_OP_LOAD 0x02 /* load instruction */
#define PERF_MEM_OP_STORE 0x04 /* store instruction */
#define PERF_MEM_OP_PFETCH 0x08 /* prefetch */
#define PERF_MEM_OP_EXEC 0x10 /* code (execution) */
#define PERF_MEM_OP_SHIFT 0
/* memory hierarchy (memory level, hit or miss) */
#define PERF_MEM_LVL_NA 0x01 /* not available */
#define PERF_MEM_LVL_HIT 0x02 /* hit level */
#define PERF_MEM_LVL_MISS 0x04 /* miss level */
#define PERF_MEM_LVL_L1 0x08 /* L1 */
#define PERF_MEM_LVL_LFB 0x10 /* Line Fill Buffer */
#define PERF_MEM_LVL_L2 0x20 /* L2 */
#define PERF_MEM_LVL_L3 0x40 /* L3 */
#define PERF_MEM_LVL_LOC_RAM 0x80 /* Local DRAM */
#define PERF_MEM_LVL_REM_RAM1 0x100 /* Remote DRAM (1 hop) */
#define PERF_MEM_LVL_REM_RAM2 0x200 /* Remote DRAM (2 hops) */
#define PERF_MEM_LVL_REM_CCE1 0x400 /* Remote Cache (1 hop) */
#define PERF_MEM_LVL_REM_CCE2 0x800 /* Remote Cache (2 hops) */
#define PERF_MEM_LVL_IO 0x1000 /* I/O memory */
#define PERF_MEM_LVL_UNC 0x2000 /* Uncached memory */
#define PERF_MEM_LVL_SHIFT 5
/* snoop mode */
#define PERF_MEM_SNOOP_NA 0x01 /* not available */
#define PERF_MEM_SNOOP_NONE 0x02 /* no snoop */
#define PERF_MEM_SNOOP_HIT 0x04 /* snoop hit */
#define PERF_MEM_SNOOP_MISS 0x08 /* snoop miss */
#define PERF_MEM_SNOOP_HITM 0x10 /* snoop hit modified */
#define PERF_MEM_SNOOP_SHIFT 19
/* locked instruction */
#define PERF_MEM_LOCK_NA 0x01 /* not available */
#define PERF_MEM_LOCK_LOCKED 0x02 /* locked transaction */
#define PERF_MEM_LOCK_SHIFT 24
/* TLB access */
#define PERF_MEM_TLB_NA 0x01 /* not available */
#define PERF_MEM_TLB_HIT 0x02 /* hit level */
#define PERF_MEM_TLB_MISS 0x04 /* miss level */
#define PERF_MEM_TLB_L1 0x08 /* L1 */
#define PERF_MEM_TLB_L2 0x10 /* L2 */
#define PERF_MEM_TLB_WK 0x20 /* Hardware Walker */
#define PERF_MEM_TLB_OS 0x40 /* OS fault handler */
#define PERF_MEM_TLB_SHIFT 26
#define PERF_MEM_S(a, s) \
(((u64)PERF_MEM_##a##_##s) << PERF_MEM_##a##_SHIFT)
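On the consuming side, a tool pulls the u64 out of the sample record and picks it apart with the same union; the 0x100442 value built earlier decodes back to load/L2-hit/no-snoop. A stand-alone sketch with the union re-declared over <stdint.h> types:

#include <stdint.h>
#include <stdio.h>

union mem_data_src {
	uint64_t val;
	struct {
		uint64_t mem_op:5, mem_lvl:14, mem_snoop:5,
			 mem_lock:2, mem_dtlb:7, mem_rsvd:31;
	};
};

int main(void)
{
	union mem_data_src src = { .val = 0x100442 };

	/* mem_op=0x2 (LOAD), mem_lvl=0x22 (HIT|L2), mem_snoop=0x2 (NONE) */
	printf("op=%#llx lvl=%#llx snoop=%#llx\n",
	       (unsigned long long)src.mem_op,
	       (unsigned long long)src.mem_lvl,
	       (unsigned long long)src.mem_snoop);
	return 0;
}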
#endif /* _UAPI_LINUX_PERF_EVENT_H */
@@ -37,6 +37,7 @@
#include <linux/ftrace_event.h>
#include <linux/hw_breakpoint.h>
#include <linux/mm_types.h>
#include <linux/cgroup.h>

#include "internal.h"

@@ -233,6 +234,20 @@ static void perf_ctx_unlock(struct perf_cpu_context *cpuctx,

#ifdef CONFIG_CGROUP_PERF

/*
 * perf_cgroup_info keeps track of time_enabled for a cgroup.
 * This is a per-cpu dynamically allocated data structure.
 */
struct perf_cgroup_info {
	u64				time;
	u64				timestamp;
};

struct perf_cgroup {
	struct cgroup_subsys_state	css;
	struct perf_cgroup_info __percpu *info;
};

/*
 * Must ensure cgroup is pinned (css_get) before calling
 * this function. In other words, we cannot call this function

@@ -976,9 +991,15 @@ static void perf_event__header_size(struct perf_event *event)
	if (sample_type & PERF_SAMPLE_PERIOD)
		size += sizeof(data->period);

	if (sample_type & PERF_SAMPLE_WEIGHT)
		size += sizeof(data->weight);

	if (sample_type & PERF_SAMPLE_READ)
		size += event->read_size;

	if (sample_type & PERF_SAMPLE_DATA_SRC)
		size += sizeof(data->data_src.val);

	event->header_size = size;
}

@@ -4193,6 +4214,12 @@ void perf_output_sample(struct perf_output_handle *handle,
		perf_output_sample_ustack(handle,
					  data->stack_user_size,
					  data->regs_user.regs);

	if (sample_type & PERF_SAMPLE_WEIGHT)
		perf_output_put(handle, data->weight);

	if (sample_type & PERF_SAMPLE_DATA_SRC)
		perf_output_put(handle, data->data_src.val);
}

void perf_prepare_sample(struct perf_event_header *header,

@@ -4782,6 +4809,9 @@ static void perf_event_mmap_event(struct perf_mmap_event *mmap_event)
	mmap_event->file_name = name;
	mmap_event->file_size = size;

	if (!(vma->vm_flags & VM_EXEC))
		mmap_event->event_id.header.misc |= PERF_RECORD_MISC_MMAP_DATA;

	mmap_event->event_id.header.size = sizeof(mmap_event->event_id) + size;

	rcu_read_lock();
...
@@ -109,11 +109,6 @@ struct kretprobe_trace_entry_head {
	unsigned long		ret_ip;
};

/*
 * trace_flag_type is an enumeration that holds different
 * states when a trace occurs. These are:
...
@@ -517,6 +517,11 @@ int proc_dowatchdog(struct ctl_table *table, int write,
		return ret;

	set_sample_period();

	/*
	 * Watchdog threads shouldn't be enabled if they are
	 * disabled. The 'watchdog_disabled' variable check in
	 * watchdog_*_all_cpus() function takes care of this.
	 */
	if (watchdog_enabled && watchdog_thresh)
		watchdog_enable_all_cpus();
	else
...
@@ -34,7 +34,13 @@ help:
cpupower: FORCE
	$(call descend,power/$@)

cgroup firewire lguest usb virtio vm: FORCE
	$(call descend,$@)

liblk: FORCE
	$(call descend,lib/lk)

perf: liblk FORCE
	$(call descend,$@)

selftests: FORCE

@@ -62,7 +68,13 @@ install: cgroup_install cpupower_install firewire_install lguest_install \
cpupower_clean:
	$(call descend,power/cpupower,clean)

cgroup_clean firewire_clean lguest_clean usb_clean virtio_clean vm_clean:
	$(call descend,$(@:_clean=),clean)

liblk_clean:
	$(call descend,lib/lk,clean)

perf_clean: liblk_clean
	$(call descend,$(@:_clean=),clean)

selftests_clean:
...
include ../../scripts/Makefile.include

# guard against environment variables
LIB_H=
LIB_OBJS=

LIB_H += debugfs.h

LIB_OBJS += $(OUTPUT)debugfs.o

LIBFILE = liblk.a

CFLAGS = -ggdb3 -Wall -Wextra -std=gnu99 -Werror -O6 -D_FORTIFY_SOURCE=2 $(EXTRA_WARNINGS) $(EXTRA_CFLAGS) -fPIC
EXTLIBS = -lpthread -lrt -lelf -lm
ALL_CFLAGS = $(CFLAGS) $(BASIC_CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
ALL_LDFLAGS = $(LDFLAGS)

RM = rm -f

$(LIBFILE): $(LIB_OBJS)
	$(QUIET_AR)$(RM) $@ && $(AR) rcs $(OUTPUT)$@ $(LIB_OBJS)

$(LIB_OBJS): $(LIB_H)

$(OUTPUT)%.o: %.c
	$(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) $<
$(OUTPUT)%.s: %.c
	$(QUIET_CC)$(CC) -S $(ALL_CFLAGS) $<
$(OUTPUT)%.o: %.S
	$(QUIET_CC)$(CC) -o $@ -c $(ALL_CFLAGS) $<

clean:
	$(RM) $(LIB_OBJS) $(LIBFILE)

.PHONY: clean
#include "util.h" #include <errno.h>
#include "debugfs.h" #include <stdio.h>
#include "cache.h" #include <stdlib.h>
#include <string.h>
#include <linux/kernel.h> #include <stdbool.h>
#include <sys/vfs.h>
#include <sys/mount.h> #include <sys/mount.h>
#include <linux/magic.h>
#include <linux/kernel.h>
#include "debugfs.h"
static int debugfs_premounted;
char debugfs_mountpoint[PATH_MAX + 1] = "/sys/kernel/debug"; char debugfs_mountpoint[PATH_MAX + 1] = "/sys/kernel/debug";
char tracing_events_path[PATH_MAX + 1] = "/sys/kernel/debug/tracing/events";
static const char *debugfs_known_mountpoints[] = { static const char * const debugfs_known_mountpoints[] = {
"/sys/kernel/debug/", "/sys/kernel/debug/",
"/debug/", "/debug/",
0, 0,
}; };
static int debugfs_found; static bool debugfs_found;
/* find the path to the mounted debugfs */ /* find the path to the mounted debugfs */
const char *debugfs_find_mountpoint(void) const char *debugfs_find_mountpoint(void)
{ {
const char **ptr; const char * const *ptr;
char type[100]; char type[100];
FILE *fp; FILE *fp;
if (debugfs_found) if (debugfs_found)
return (const char *) debugfs_mountpoint; return (const char *)debugfs_mountpoint;
ptr = debugfs_known_mountpoints; ptr = debugfs_known_mountpoints;
while (*ptr) { while (*ptr) {
if (debugfs_valid_mountpoint(*ptr) == 0) { if (debugfs_valid_mountpoint(*ptr) == 0) {
debugfs_found = 1; debugfs_found = true;
strcpy(debugfs_mountpoint, *ptr); strcpy(debugfs_mountpoint, *ptr);
return debugfs_mountpoint; return debugfs_mountpoint;
} }
...@@ -52,7 +55,7 @@ const char *debugfs_find_mountpoint(void) ...@@ -52,7 +55,7 @@ const char *debugfs_find_mountpoint(void)
if (strcmp(type, "debugfs") != 0) if (strcmp(type, "debugfs") != 0)
return NULL; return NULL;
debugfs_found = 1; debugfs_found = true;
return debugfs_mountpoint; return debugfs_mountpoint;
} }
...@@ -71,21 +74,12 @@ int debugfs_valid_mountpoint(const char *debugfs) ...@@ -71,21 +74,12 @@ int debugfs_valid_mountpoint(const char *debugfs)
return 0; return 0;
} }
static void debugfs_set_tracing_events_path(const char *mountpoint)
{
snprintf(tracing_events_path, sizeof(tracing_events_path), "%s/%s",
mountpoint, "tracing/events");
}
/* mount the debugfs somewhere if it's not mounted */ /* mount the debugfs somewhere if it's not mounted */
char *debugfs_mount(const char *mountpoint) char *debugfs_mount(const char *mountpoint)
{ {
/* see if it's already mounted */ /* see if it's already mounted */
if (debugfs_find_mountpoint()) { if (debugfs_find_mountpoint())
debugfs_premounted = 1;
goto out; goto out;
}
/* if not mounted and no argument */ /* if not mounted and no argument */
if (mountpoint == NULL) { if (mountpoint == NULL) {
...@@ -100,15 +94,8 @@ char *debugfs_mount(const char *mountpoint) ...@@ -100,15 +94,8 @@ char *debugfs_mount(const char *mountpoint)
return NULL; return NULL;
/* save the mountpoint */ /* save the mountpoint */
debugfs_found = 1; debugfs_found = true;
strncpy(debugfs_mountpoint, mountpoint, sizeof(debugfs_mountpoint)); strncpy(debugfs_mountpoint, mountpoint, sizeof(debugfs_mountpoint));
out: out:
debugfs_set_tracing_events_path(debugfs_mountpoint);
return debugfs_mountpoint; return debugfs_mountpoint;
} }
void debugfs_set_path(const char *mountpoint)
{
snprintf(debugfs_mountpoint, sizeof(debugfs_mountpoint), "%s", mountpoint);
debugfs_set_tracing_events_path(mountpoint);
}
#ifndef __LK_DEBUGFS_H__
#define __LK_DEBUGFS_H__
#define _STR(x) #x
#define STR(x) _STR(x)
/*
* On most systems <limits.h> would have given us this, but not on some systems
* (e.g. GNU/Hurd).
*/
#ifndef PATH_MAX
#define PATH_MAX 4096
#endif
#ifndef DEBUGFS_MAGIC
#define DEBUGFS_MAGIC 0x64626720
#endif
#ifndef PERF_DEBUGFS_ENVIRONMENT
#define PERF_DEBUGFS_ENVIRONMENT "PERF_DEBUGFS_DIR"
#endif
const char *debugfs_find_mountpoint(void);
int debugfs_valid_mountpoint(const char *debugfs);
char *debugfs_mount(const char *mountpoint);
extern char debugfs_mountpoint[];
#endif /* __LK_DEBUGFS_H__ */
@@ -93,6 +93,9 @@ OPTIONS
--skip-missing::
	Skip symbols that cannot be annotated.

--group::
	Show event group information together.

SEE ALSO
--------
linkperf:perf-record[1], linkperf:perf-report[1]
perf-mem(1)
===========

NAME
----
perf-mem - Profile memory accesses

SYNOPSIS
--------
[verse]
'perf mem' [<options>] (record [<command>] | report)

DESCRIPTION
-----------
"perf mem -t <TYPE> record" runs a command and gathers memory operation data
from it, into perf.data. Perf record options are accepted and are passed through.

"perf mem -t <TYPE> report" displays the result. It invokes perf report with the
right set of options to display a memory access profile.

OPTIONS
-------
<command>...::
	Any command you can specify in a shell.

-t::
--type=::
	Select the memory operation type: load or store (default: load)

-D::
--dump-raw-samples=::
	Dump the raw decoded samples on the screen in a format that is easy to parse with
	one sample per line.

-x::
--field-separator::
	Specify the field separator used when dumping raw samples (-D option). By default,
	the separator is the space character.

-C::
--cpu-list::
	Restrict dump of raw samples to those provided via this option. Note that the same
	option can be passed in record mode. It will be interpreted the same way as perf
	record.

SEE ALSO
--------
linkperf:perf-record[1], linkperf:perf-report[1]
@@ -182,6 +182,12 @@ is enabled for all the sampling events. The sampled branch type is the same for
The various filters must be specified as a comma separated list: --branch-filter any_ret,u,k
Note that this feature may not be available on all processors.

-W::
--weight::
	Enable weighted sampling. An additional weight is recorded per sample and can be
	displayed with the weight and local_weight sort keys. This currently works for TSX
	abort events and some memory events in precise mode on modern Intel CPUs.

SEE ALSO
--------
linkperf:perf-stat[1], linkperf:perf-list[1]
@@ -59,7 +59,7 @@ OPTIONS
--sort=::
	Sort histogram entries by given key(s) - multiple keys can be specified
	in CSV format. Following sort keys are available:
	pid, comm, dso, symbol, parent, cpu, srcline, weight, local_weight.

	Each key has following meaning:

@@ -206,6 +206,10 @@ OPTIONS
--group::
	Show event group information together.

--demangle::
	Demangle symbol names to human readable form. It's enabled by default,
	disable with --no-demangle.

SEE ALSO
--------
linkperf:perf-stat[1], linkperf:perf-annotate[1]
@@ -52,7 +52,7 @@ OPTIONS
-r::
--repeat=<n>::
	repeat command and print average + stddev (max: 100). 0 means forever.

-B::
--big-num::

@@ -119,13 +119,19 @@ perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- m
	Print count deltas every N milliseconds (minimum: 100ms)
	example: perf stat -I 1000 -e cycles -a sleep 5

--per-socket::
	Aggregate counts per processor socket for system-wide mode measurements. This
	is a useful mode to detect imbalance between sockets. To enable this mode,
	use --per-socket in addition to -a. (system-wide). The output includes the
	socket number and the number of online processors on that socket. This is
	useful to gauge the amount of aggregation.

--per-core::
	Aggregate counts per physical processor for system-wide mode measurements. This
	is a useful mode to detect imbalance between physical cores. To enable this mode,
	use --per-core in addition to -a. (system-wide). The output includes the
	core number and the number of online logical processors on that physical processor.

EXAMPLES
--------
...
@@ -112,7 +112,7 @@ Default is to monitor all CPUS.
-s::
--sort::
	Sort by key(s): pid, comm, dso, symbol, parent, srcline, weight, local_weight.

-n::
--show-nr-samples::
...
tools/perf
tools/scripts
tools/lib/traceevent
tools/lib/lk
include/linux/const.h
include/linux/perf_event.h
include/linux/rbtree.h
...
@@ -35,7 +35,9 @@ include config/utilities.mak
#
# Define WERROR=0 to disable treating any warnings as errors.
#
# Define NO_NEWT if you do not want TUI support. (deprecated)
#
# Define NO_SLANG if you do not want TUI support.
#
# Define NO_GTK2 if you do not want GTK+ GUI support.
#

@@ -104,6 +106,10 @@ ifdef PARSER_DEBUG
	PARSER_DEBUG_CFLAGS := -DPARSER_DEBUG
endif

ifdef NO_NEWT
	NO_SLANG=1
endif

CFLAGS = -fno-omit-frame-pointer -ggdb3 -funwind-tables -Wall -Wextra -std=gnu99 $(CFLAGS_WERROR) $(CFLAGS_OPTIMIZE) $(EXTRA_WARNINGS) $(EXTRA_CFLAGS) $(PARSER_DEBUG_CFLAGS)
EXTLIBS = -lpthread -lrt -lelf -lm
ALL_CFLAGS = $(CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE

@@ -215,6 +221,7 @@ BASIC_CFLAGS = \
	-Iutil \
	-I. \
	-I$(TRACE_EVENT_DIR) \
	-I../lib/ \
	-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE

BASIC_LDFLAGS =

@@ -240,19 +247,28 @@ SCRIPT_SH += perf-archive.sh
grep-libs = $(filter -l%,$(1))
strip-libs = $(filter-out -l%,$(1))

LK_DIR = ../lib/lk/
TRACE_EVENT_DIR = ../lib/traceevent/

LK_PATH=$(LK_DIR)

ifneq ($(OUTPUT),)
	TE_PATH=$(OUTPUT)
ifneq ($(subdir),)
	LK_PATH=$(OUTPUT)$(LK_DIR)
else
	LK_PATH=$(OUTPUT)
endif
else
	TE_PATH=$(TRACE_EVENT_DIR)
endif

LIBTRACEEVENT = $(TE_PATH)libtraceevent.a
export LIBTRACEEVENT

LIBLK = $(LK_PATH)liblk.a
export LIBLK

# python extension build directories
PYTHON_EXTBUILD     := $(OUTPUT)python_ext_build/
PYTHON_EXTBUILD_LIB := $(PYTHON_EXTBUILD)lib/

@@ -262,7 +278,7 @@ export PYTHON_EXTBUILD_LIB PYTHON_EXTBUILD_TMP
python-clean := rm -rf $(PYTHON_EXTBUILD) $(OUTPUT)python/perf.so

PYTHON_EXT_SRCS := $(shell grep -v ^\# util/python-ext-sources)
PYTHON_EXT_DEPS := util/python-ext-sources util/setup.py $(LIBTRACEEVENT)

$(OUTPUT)python/perf.so: $(PYTHON_EXT_SRCS) $(PYTHON_EXT_DEPS)
	$(QUIET_GEN)CFLAGS='$(BASIC_CFLAGS)' $(PYTHON_WORD) util/setup.py \

@@ -355,7 +371,6 @@ LIB_H += util/cache.h
LIB_H += util/callchain.h
LIB_H += util/build-id.h
LIB_H += util/debug.h
LIB_H += util/sysfs.h
LIB_H += util/pmu.h
LIB_H += util/event.h

@@ -416,7 +431,6 @@ LIB_OBJS += $(OUTPUT)util/annotate.o
LIB_OBJS += $(OUTPUT)util/build-id.o
LIB_OBJS += $(OUTPUT)util/config.o
LIB_OBJS += $(OUTPUT)util/ctype.o
LIB_OBJS += $(OUTPUT)util/sysfs.o
LIB_OBJS += $(OUTPUT)util/pmu.o
LIB_OBJS += $(OUTPUT)util/environment.o

@@ -503,6 +517,10 @@ LIB_OBJS += $(OUTPUT)tests/evsel-tp-sched.o
LIB_OBJS += $(OUTPUT)tests/pmu.o
LIB_OBJS += $(OUTPUT)tests/hists_link.o
LIB_OBJS += $(OUTPUT)tests/python-use.o
LIB_OBJS += $(OUTPUT)tests/bp_signal.o
LIB_OBJS += $(OUTPUT)tests/bp_signal_overflow.o
LIB_OBJS += $(OUTPUT)tests/task-exit.o
LIB_OBJS += $(OUTPUT)tests/sw-clock.o

BUILTIN_OBJS += $(OUTPUT)builtin-annotate.o
BUILTIN_OBJS += $(OUTPUT)builtin-bench.o

@@ -535,8 +553,9 @@ BUILTIN_OBJS += $(OUTPUT)builtin-lock.o
BUILTIN_OBJS += $(OUTPUT)builtin-kvm.o
BUILTIN_OBJS += $(OUTPUT)builtin-inject.o
BUILTIN_OBJS += $(OUTPUT)tests/builtin-test.o
BUILTIN_OBJS += $(OUTPUT)builtin-mem.o

PERFLIBS = $(LIB_FILE) $(LIBLK) $(LIBTRACEEVENT)

#
# Platform specific tweaks

@@ -667,15 +686,15 @@ ifndef NO_LIBAUDIT
	endif
endif

ifndef NO_SLANG
	FLAGS_SLANG=$(ALL_CFLAGS) $(ALL_LDFLAGS) $(EXTLIBS) -I/usr/include/slang -lslang
	ifneq ($(call try-cc,$(SOURCE_SLANG),$(FLAGS_SLANG),libslang),y)
		msg := $(warning slang not found, disables TUI support. Please install slang-devel or libslang-dev);
	else
		# Fedora has /usr/include/slang/slang.h, but ubuntu /usr/include/slang.h
		BASIC_CFLAGS += -I/usr/include/slang
		BASIC_CFLAGS += -DSLANG_SUPPORT
		EXTLIBS += -lslang
		LIB_OBJS += $(OUTPUT)ui/browser.o
		LIB_OBJS += $(OUTPUT)ui/browsers/annotate.o
		LIB_OBJS += $(OUTPUT)ui/browsers/hists.o

@@ -1051,6 +1070,18 @@ $(LIBTRACEEVENT):
$(LIBTRACEEVENT)-clean:
	$(QUIET_SUBDIR0)$(TRACE_EVENT_DIR) $(QUIET_SUBDIR1) O=$(OUTPUT) clean

# if subdir is set, we've been called from above so target has been built
# already
$(LIBLK):
ifeq ($(subdir),)
	$(QUIET_SUBDIR0)$(LK_DIR) $(QUIET_SUBDIR1) O=$(OUTPUT) liblk.a
endif

$(LIBLK)-clean:
ifeq ($(subdir),)
	$(QUIET_SUBDIR0)$(LK_DIR) $(QUIET_SUBDIR1) O=$(OUTPUT) clean
endif

help:
	@echo 'Perf make targets:'
	@echo '  doc		- make *all* documentation (see below)'

@@ -1171,7 +1202,7 @@ $(INSTALL_DOC_TARGETS):
### Cleaning rules

clean: $(LIBTRACEEVENT)-clean $(LIBLK)-clean
	$(RM) $(LIB_OBJS) $(BUILTIN_OBJS) $(LIB_FILE) $(OUTPUT)perf-archive $(OUTPUT)perf.o $(LANG_BINDINGS)
	$(RM) $(ALL_PROGRAMS) perf
	$(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope*

@@ -1181,6 +1212,6 @@ clean: $(LIBTRACEEVENT)-clean
	$(RM) $(OUTPUT)util/*-flex*
	$(python-clean)

.PHONY: all install clean strip $(LIBTRACEEVENT) $(LIBLK)
.PHONY: shell_compatibility_test please_set_SHELL_PATH_to_a_more_modern_shell
.PHONY: .FORCE-PERF-VERSION-FILE TAGS tags cscope .FORCE-PERF-CFLAGS
@@ -8,10 +8,7 @@
 * published by the Free Software Foundation.
 */

#include <stddef.h>
#include <dwarf-regs.h>

struct pt_regs_dwarfnum {
...
@@ -9,10 +9,7 @@
 * 2 of the License, or (at your option) any later version.
 */

#include <stddef.h>
#include <dwarf-regs.h>
...
@@ -6,7 +6,7 @@
 *
 */

#include <stddef.h>
#include <dwarf-regs.h>

#define NUM_GPRS 16
...
@@ -19,7 +19,7 @@
 *
 */

#include <stddef.h>
#include <dwarf-regs.h>

/*
...
@@ -9,7 +9,7 @@
 * 2 of the License, or (at your option) any later version.
 */

#include <stddef.h>
#include <dwarf-regs.h>

#define SPARC_MAX_REGS	96
...
@@ -20,7 +20,7 @@
 *
 */

#include <stddef.h>
#include <dwarf-regs.h>

/*
...
@@ -63,7 +63,7 @@ static int perf_evsel__add_sample(struct perf_evsel *evsel,
		return 0;
	}

	he = __hists__add_entry(&evsel->hists, al, NULL, 1, 1);
	if (he == NULL)
		return -ENOMEM;

@@ -109,14 +109,16 @@ static int process_sample_event(struct perf_tool *tool,
	return 0;
}

static int hist_entry__tty_annotate(struct hist_entry *he,
				    struct perf_evsel *evsel,
				    struct perf_annotate *ann)
{
	return symbol__tty_annotate(he->ms.sym, he->ms.map, evsel,
				    ann->print_line, ann->full_paths, 0, 0);
}

static void hists__find_annotations(struct hists *self,
				    struct perf_evsel *evsel,
				    struct perf_annotate *ann)
{
	struct rb_node *nd = rb_first(&self->entries), *next;

@@ -142,14 +144,14 @@ static void hists__find_annotations(struct hists *self, int evidx,
		if (use_browser == 2) {
			int ret;

			ret = hist_entry__gtk_annotate(he, evsel, NULL);
			if (!ret || !ann->skip_missing)
				return;

			/* skip missing symbols */
			nd = rb_next(nd);
		} else if (use_browser == 1) {
			key = hist_entry__tui_annotate(he, evsel, NULL);
			switch (key) {
			case -1:
				if (!ann->skip_missing)

@@ -168,7 +170,7 @@ static void hists__find_annotations(struct hists *self, int evidx,
			if (next != NULL)
				nd = next;
		} else {
			hist_entry__tty_annotate(he, evsel, ann);
			nd = rb_next(nd);
			/*
			 * Since we have a hist_entry per IP for the same

@@ -230,7 +232,12 @@ static int __cmd_annotate(struct perf_annotate *ann)
		total_nr_samples += nr_samples;
		hists__collapse_resort(hists);
		hists__output_resort(hists);

		if (symbol_conf.event_group &&
		    !perf_evsel__is_group_leader(pos))
			continue;

		hists__find_annotations(hists, pos, ann);
	}
}

@@ -312,6 +319,8 @@ int cmd_annotate(int argc, const char **argv, const char *prefix __maybe_unused)
		    "Specify disassembler style (e.g. -M intel for intel syntax)"),
	OPT_STRING(0, "objdump", &objdump_path, "path",
		   "objdump binary to use for disassembly and annotations"),
	OPT_BOOLEAN(0, "group", &symbol_conf.event_group,
		    "Show event group information together"),
	OPT_END()
	};
...
@@ -231,9 +231,10 @@ int perf_diff__formula(struct hist_entry *he, struct hist_entry *pair,
}

static int hists__add_entry(struct hists *self,
			    struct addr_location *al, u64 period,
			    u64 weight)
{
	if (__hists__add_entry(self, al, NULL, period, weight) != NULL)
		return 0;

	return -ENOMEM;
}

@@ -255,7 +256,7 @@ static int diff__process_sample_event(struct perf_tool *tool __maybe_unused,
	if (al.filtered)
		return 0;

	if (hists__add_entry(&evsel->hists, &al, sample->period, sample->weight)) {
		pr_warning("problem incrementing symbol period, skipping event\n");
		return -1;
	}
...
@@ -12,7 +12,7 @@
#include "util/parse-options.h"
#include "util/trace-event.h"
#include "util/debug.h"
#include <lk/debugfs.h>
#include "util/tool.h"
#include "util/stat.h"
...
@@ -37,7 +37,7 @@
#include "util/strfilter.h"
#include "util/symbol.h"
#include "util/debug.h"
#include <lk/debugfs.h>
#include "util/parse-options.h"
#include "util/probe-finder.h"
#include "util/probe-event.h"
...
...@@ -5,8 +5,6 @@ ...@@ -5,8 +5,6 @@
* (or a CPU, or a PID) into the perf.data output file - for * (or a CPU, or a PID) into the perf.data output file - for
* later analysis via perf report. * later analysis via perf report.
*/ */
#define _FILE_OFFSET_BITS 64
#include "builtin.h" #include "builtin.h"
#include "perf.h" #include "perf.h"
@@ -474,7 +472,9 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
 	}
 
 	if (forks) {
-		err = perf_evlist__prepare_workload(evsel_list, opts, argv);
+		err = perf_evlist__prepare_workload(evsel_list, &opts->target,
+						    argv, opts->pipe_output,
+						    true);
 		if (err < 0) {
 			pr_err("Couldn't run the workload!\n");
 			goto out_delete_session;
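perf_evlist__prepare_workload() now takes just the pieces it needs (target, argv, and the pipe-output flag) instead of a whole perf_record_opts, so tools that have no perf_record_opts can reuse it. The trailing boolean is plausibly a want_signal-style switch controlling whether the forked child blocks until the parent kicks it off; that reading is inferred from the call shape, not spelled out in this hunk.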
@@ -953,6 +953,8 @@ const struct option record_options[] = {
 	OPT_CALLBACK('j', "branch-filter", &record.opts.branch_stack,
 		     "branch filter mask", "branch stack filter modes",
 		     parse_branch_stack),
+	OPT_BOOLEAN('W', "weight", &record.opts.sample_weight,
+		    "sample by weight (on special events only)"),
 	OPT_END()
 };
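The "special events" in the help text are the precise memory events used for PEBS-based profiling; on events without a hardware weight source the field is simply absent or zero. A plausible invocation is 'perf record -W -d -e cpu/mem-loads/pp -- ./workload', with -d adding data addresses and -W recording the access cost as the weight (the event name assumes an Intel-style mem-loads alias).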
@@ -964,7 +966,7 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
 	struct perf_record *rec = &record;
 	char errbuf[BUFSIZ];
 
-	evsel_list = perf_evlist__new(NULL, NULL);
+	evsel_list = perf_evlist__new();
 	if (evsel_list == NULL)
 		return -ENOMEM;
@@ -1026,7 +1028,7 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
 		ui__error("%s", errbuf);
 
 		err = -saved_errno;
-		goto out_free_fd;
+		goto out_symbol_exit;
 	}
 
 	err = -ENOMEM;
@@ -1057,6 +1059,9 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
 	}
 
 	err = __cmd_record(&record, argc, argv);
+
+	perf_evlist__munmap(evsel_list);
+	perf_evlist__close(evsel_list);
 out_free_fd:
 	perf_evlist__delete_maps(evsel_list);
 out_symbol_exit:
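Taken together, the record changes pin down the evlist lifecycle. A minimal sketch of the resulting pattern, assuming this era's evlist API:

    struct perf_evlist *evlist = perf_evlist__new();    /* no maps passed anymore */
    if (evlist == NULL)
            return -ENOMEM;
    /* ... create maps, open and mmap the events, run the session ... */
    perf_evlist__munmap(evlist);        /* tear down the ring buffers */
    perf_evlist__close(evlist);         /* close the per-event fds */
    perf_evlist__delete_maps(evlist);
    perf_evlist__delete(evlist);        /* final free */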
...
@@ -1671,7 +1671,6 @@ int cmd_sched(int argc, const char **argv, const char *prefix __maybe_unused)
 		.sample		 = perf_sched__process_tracepoint_sample,
 		.comm		 = perf_event__process_comm,
 		.lost		 = perf_event__process_lost,
-		.exit		 = perf_event__process_exit,
 		.fork		 = perf_event__process_fork,
 		.ordered_samples = true,
 	},

...
@@ -231,7 +231,7 @@ static void perf_top__show_details(struct perf_top *top)
 	printf("Showing %s for %s\n", perf_evsel__name(top->sym_evsel), symbol->name);
 	printf("  Events  Pcnt (>=%d%%)\n", top->sym_pcnt_filter);
 
-	more = symbol__annotate_printf(symbol, he->ms.map, top->sym_evsel->idx,
+	more = symbol__annotate_printf(symbol, he->ms.map, top->sym_evsel,
 				       0, top->sym_pcnt_filter, top->print_entries, 4);
 	if (top->zero)
 		symbol__annotate_zero_histogram(symbol, top->sym_evsel->idx);
@@ -251,7 +252,8 @@ static struct hist_entry *perf_evsel__add_hist_entry(struct perf_evsel *evsel,
 {
 	struct hist_entry *he;
 
-	he = __hists__add_entry(&evsel->hists, al, NULL, sample->period);
+	he = __hists__add_entry(&evsel->hists, al, NULL, sample->period,
+				sample->weight);
 	if (he == NULL)
 		return NULL;
@@ -1088,7 +1089,7 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
 	OPT_INCR('v', "verbose", &verbose,
 		 "be more verbose (show counter open errors, etc)"),
 	OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
-		   "sort by key(s): pid, comm, dso, symbol, parent"),
+		   "sort by key(s): pid, comm, dso, symbol, parent, weight, local_weight"),
 	OPT_BOOLEAN('n', "show-nr-samples", &symbol_conf.show_nr_samples,
 		    "Show a column with the number of samples"),
 	OPT_CALLBACK_DEFAULT('G', "call-graph", &top.record_opts,
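Two threads meet in top here: symbol__annotate_printf() now takes the evsel itself rather than a bare index (the group view needs the evsel to reach its siblings), and the recorded weight becomes sortable, e.g. 'perf top -s local_weight' to float expensive loads. Judging by the key names, local_weight sorts by the per-sample value and weight by the accumulated one; this hunk itself only registers the keys.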
@@ -1116,7 +1117,7 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
 		NULL
 	};
 
-	top.evlist = perf_evlist__new(NULL, NULL);
+	top.evlist = perf_evlist__new();
 	if (top.evlist == NULL)
 		return -ENOMEM;

...
@@ -36,6 +36,7 @@ extern int cmd_kvm(int argc, const char **argv, const char *prefix);
 extern int cmd_test(int argc, const char **argv, const char *prefix);
 extern int cmd_trace(int argc, const char **argv, const char *prefix);
 extern int cmd_inject(int argc, const char **argv, const char *prefix);
+extern int cmd_mem(int argc, const char **argv, const char *prefix);
 extern int find_scripts(char **scripts_array, char **scripts_path_array);
 #endif
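cmd_mem() is the entry point of the new 'perf mem' porcelain that sits on top of the weighted-sample plumbing above; a typical session would presumably be 'perf mem record ./workload' followed by 'perf mem report', with the record step turning on -W and the memory events internally.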
@@ -10,17 +10,18 @@ perf-buildid-list mainporcelain common
 perf-diff mainporcelain common
 perf-evlist mainporcelain common
 perf-inject mainporcelain common
+perf-kmem mainporcelain common
+perf-kvm mainporcelain common
 perf-list mainporcelain common
-perf-sched mainporcelain common
+perf-lock mainporcelain common
+perf-mem mainporcelain common
+perf-probe mainporcelain full
 perf-record mainporcelain common
 perf-report mainporcelain common
+perf-sched mainporcelain common
+perf-script mainporcelain common
 perf-stat mainporcelain common
+perf-test mainporcelain common
 perf-timechart mainporcelain common
 perf-top mainporcelain common
 perf-trace mainporcelain common
-perf-script mainporcelain common
-perf-probe mainporcelain full
-perf-kmem mainporcelain common
-perf-lock mainporcelain common
-perf-kvm mainporcelain common
-perf-test mainporcelain common
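Besides registering perf-mem so it shows up in the generated command list (help and shell completion are built from this file), the entries are re-sorted alphabetically, which is why most of these lines only move.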
@@ -61,15 +61,13 @@ int main(void)
 }
 endef
 
-ifndef NO_NEWT
-define SOURCE_NEWT
-#include <newt.h>
+ifndef NO_SLANG
+define SOURCE_SLANG
+#include <slang.h>
 
 int main(void)
 {
-	newtInit();
-	newtCls();
-	return newtFinished();
+	return SLsmg_init_smg();
 }
 endef
 endif
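This is the compile-probe the perf build uses to detect the TUI backend: the snippet is compiled and, if it links against libslang, slang support is enabled. With newt gone, resolving SLsmg_init_smg(), the screen-management init from slang's own API, is the whole test.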
...
@@ -218,6 +218,7 @@ struct perf_record_opts {
 	bool	     pipe_output;
 	bool	     raw_samples;
 	bool	     sample_address;
+	bool	     sample_weight;
 	bool	     sample_time;
 	bool	     period;
 	unsigned int freq;
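sample_weight is the record-side switch behind -W above. The attr setup presumably translates it the same way the neighbouring flags are handled, along the lines of:

    /* sketch, assuming the pattern used for sample_address and friends */
    if (opts->sample_weight)
            attr->sample_type |= PERF_SAMPLE_WEIGHT;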
...
@@ -2,6 +2,7 @@
 fd=1
 group_fd=-1
 flags=0
+cpu=*
 type=0|1
 size=96
 config=0
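This is one of the attr test's expectation files: the harness logs every perf_event_open() a command performs and matches each attribute against these lines. The '*' appears to be the harness wildcard, so the cpu an event is opened on is no longer asserted, letting tests that pin events to particular cpus share this baseline.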
...