Commit 55b4ce61 authored by Ingo Molnar

Merge tag 'perf-core-for-mingo-4.17-20180305' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

- Be more robust when drawing arrows in the annotation TUI, avoiding a
  segfault when jump instructions target addresses in functions other
  than the one currently being annotated. The full fix will come in
  the following days, when jumping to other functions will work like
  'call' instructions (Arnaldo Carvalho de Melo)

- Allow asking for the maximum allowed sample rate in 'top' and
  'record', i.e. 'perf record -F max' will read the
  kernel.perf_event_max_sample_rate sysctl and use it (Arnaldo Carvalho de Melo)
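
  For instance (illustrative values; the sysctl setting varies per system):

      $ sysctl kernel.perf_event_max_sample_rate
      kernel.perf_event_max_sample_rate = 25000
      $ perf record -F max sleep 1     # samples at 25000 Hz on this system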

- When the user specifies a freq above kernel.perf_event_max_sample_rate,
  throttle it down to that max freq and warn the user about it. Also add
  --strict-freq so that the previous behaviour, of not starting the
  session when the desired freq can't be used, can still be selected
  (Arnaldo Carvalho de Melo)
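
  An illustrative session (the frequency values are made up; the messages
  follow the new record_opts__config_freq() wording further below):

      $ perf record -F 200000 sleep 1
      warning: Maximum frequency rate (25,000 Hz) exceeded, throttling from 200,000 Hz to 25,000 Hz.
      $ perf record --strict-freq -F 200000 sleep 1
      error: Maximum frequency rate (25,000 Hz) exceeded.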

- Find the 'call' instruction target symbol at parsing time, used so far
  in the TUI, as part of the infrastructure changes that will end up
  allowing jumps to navigate to other functions, just like 'call'
  instructions (Arnaldo Carvalho de Melo)

- Use xyarray dimensions to iterate fds in 'perf stat' (Andi Kleen)

- Ignore threads for which the current user doesn't have permissions when
  enabling system-wide --per-thread (Jin Yao)

- Fix some backtrace perf test cases to use 'perf record' + 'perf script'
  instead, till 'perf trace' starts using ordered_events or equivalent
  to avoid symbol resolving artifacts due to reordering of
  PERF_RECORD_MMAP events (Jiri Olsa)

- Fix crash in 'perf record' pipe mode, it needs to allocate the ID
  array even for a single event, unlike non-pipe mode (Jiri Olsa)
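
  E.g., pipe mode with just one event, which previously crashed:

      $ perf record -e instructions -o - sleep 1 | perf report -i -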

- Demote to debug output the annoying fallback message shown when newer
  'perf top' binaries try to use overwrite mode on older kernels that
  don't support it (Kan Liang)

- Switch the last users of the old mmap read forward APIs to the newer
  perf_mmap__read_event() one, then discard those old APIs (Kan Liang)
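
  The resulting idiom, repeated across the tools converted in this merge (a
  minimal sketch; 'evlist', 'event' and 'i' are declared as in the callers
  shown in the diff below):

      struct perf_mmap *md;
      u64 start, end;

      for (i = 0; i < evlist->nr_mmaps; i++) {
              md = &evlist->mmap[i];
              /* Snapshot the readable [start, end) range of this ring buffer */
              if (perf_mmap__read_init(md, false, &start, &end) < 0)
                      continue;
              while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
                      /* ... process 'event' ... */
                      perf_mmap__consume(md, false); /* release this event's space */
              }
              perf_mmap__read_done(md); /* mark the snapshot as fully read */
      }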

- Fix the usage on the 'perf kallsyms' man page (Sangwon Hong)

- Simplify cgroup arguments when tracking multiple events (weiping zhang)
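
  E.g., with this change the following two command lines are equivalent
  ('e1', 'e2' and 'foo' being placeholders for two events and a cgroup):

      $ perf stat -e e1 -e e2 -G foo,foo -- sleep 1
      $ perf stat -e e1 -e e2 -G foo -- sleep 1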
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents 8af31363 6afad54d
@@ -8,7 +8,7 @@ perf-kallsyms - Searches running kernel for symbols
 SYNOPSIS
 --------
 [verse]
-'perf kallsyms <options> symbol_name[,symbol_name...]'
+'perf kallsyms' [<options>] symbol_name[,symbol_name...]
 
 DESCRIPTION
 -----------
...
@@ -191,9 +191,16 @@ OPTIONS
 -i::
 --no-inherit::
 	Child tasks do not inherit counters.
 
 -F::
 --freq=::
-	Profile at this frequency.
+	Profile at this frequency. Use 'max' to use the currently maximum
+	allowed frequency, i.e. the value in the kernel.perf_event_max_sample_rate
+	sysctl. Will throttle down to the currently maximum allowed frequency.
+	See --strict-freq.
+
+--strict-freq::
+	Fail if the specified frequency can't be used.
 
 -m::
 --mmap-pages=::
@@ -308,7 +315,11 @@ can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
 to first event, second cgroup to second event and so on. It is possible to provide
 an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
 corresponding events, i.e., they always refer to events defined earlier on the command
-line.
+line. If the user wants to track multiple events for a specific cgroup, the user can
+use '-e e1 -e e2 -G foo,foo' or just use '-e e1 -e e2 -G foo'.
+
+If wanting to monitor, say, 'cycles' for a cgroup and also for system wide, this
+command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.
 
 -b::
 --branch-any::
...
@@ -118,7 +118,11 @@ can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
 to first event, second cgroup to second event and so on. It is possible to provide
 an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
 corresponding events, i.e., they always refer to events defined earlier on the command
-line.
+line. If the user wants to track multiple events for a specific cgroup, the user can
+use '-e e1 -e e2 -G foo,foo' or just use '-e e1 -e e2 -G foo'.
+
+If wanting to monitor, say, 'cycles' for a cgroup and also for system wide, this
+command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.
 
 -o file::
 --output file::
...
@@ -55,7 +55,9 @@ Default is to monitor all CPUS.
 
 -F <freq>::
 --freq=<freq>::
-	Profile at this frequency.
+	Profile at this frequency. Use 'max' to use the currently maximum
+	allowed frequency, i.e. the value in the kernel.perf_event_max_sample_rate
+	sysctl.
 
 -i::
 --inherit::
...
@@ -60,6 +60,8 @@ int test__perf_time_to_tsc(struct test *test __maybe_unused, int subtest __maybe_unused)
 	union perf_event *event;
 	u64 test_tsc, comm1_tsc, comm2_tsc;
 	u64 test_time, comm1_time = 0, comm2_time = 0;
+	struct perf_mmap *md;
+	u64 end, start;
 
 	threads = thread_map__new(-1, getpid(), UINT_MAX);
 	CHECK_NOT_NULL__(threads);
@@ -109,7 +111,11 @@ int test__perf_time_to_tsc(struct test *test __maybe_unused, int subtest __maybe_unused)
 	perf_evlist__disable(evlist);
 
 	for (i = 0; i < evlist->nr_mmaps; i++) {
-		while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) {
+		md = &evlist->mmap[i];
+		if (perf_mmap__read_init(md, false, &start, &end) < 0)
+			continue;
+
+		while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
 			struct perf_sample sample;
 
 			if (event->header.type != PERF_RECORD_COMM ||
@@ -128,8 +134,9 @@ int test__perf_time_to_tsc(struct test *test __maybe_unused, int subtest __maybe_unused)
 				comm2_time = sample.time;
 			}
 next_event:
-			perf_evlist__mmap_consume(evlist, i);
+			perf_mmap__consume(md, false);
 		}
+		perf_mmap__read_done(md);
 	}
 
 	if (!comm1_time || !comm2_time)
...
@@ -743,16 +743,24 @@ static bool verify_vcpu(int vcpu)
 static s64 perf_kvm__mmap_read_idx(struct perf_kvm_stat *kvm, int idx,
 				   u64 *mmap_time)
 {
+	struct perf_evlist *evlist = kvm->evlist;
 	union perf_event *event;
+	struct perf_mmap *md;
+	u64 end, start;
 	u64 timestamp;
 	s64 n = 0;
 	int err;
 
 	*mmap_time = ULLONG_MAX;
-	while ((event = perf_evlist__mmap_read(kvm->evlist, idx)) != NULL) {
-		err = perf_evlist__parse_sample_timestamp(kvm->evlist, event, &timestamp);
+	md = &evlist->mmap[idx];
+	err = perf_mmap__read_init(md, false, &start, &end);
+	if (err < 0)
+		return (err == -EAGAIN) ? 0 : -1;
+
+	while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
+		err = perf_evlist__parse_sample_timestamp(evlist, event, &timestamp);
 		if (err) {
-			perf_evlist__mmap_consume(kvm->evlist, idx);
+			perf_mmap__consume(md, false);
 			pr_err("Failed to parse sample\n");
 			return -1;
 		}
@@ -762,7 +770,7 @@ static s64 perf_kvm__mmap_read_idx(struct perf_kvm_stat *kvm, int idx,
 		 * FIXME: Here we can't consume the event, as perf_session__queue_event will
 		 * point to it, and it'll get possibly overwritten by the kernel.
 		 */
-		perf_evlist__mmap_consume(kvm->evlist, idx);
+		perf_mmap__consume(md, false);
 
 		if (err) {
 			pr_err("Failed to enqueue sample: %d\n", err);
@@ -779,6 +787,7 @@ static s64 perf_kvm__mmap_read_idx(struct perf_kvm_stat *kvm, int idx,
 			break;
 	}
 
+	perf_mmap__read_done(md);
 	return n;
 }
...
@@ -45,6 +45,7 @@
 #include <errno.h>
 #include <inttypes.h>
+#include <locale.h>
 #include <poll.h>
 #include <unistd.h>
 #include <sched.h>
@@ -881,6 +882,15 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
 		}
 	}
 
+	/*
+	 * If we have just single event and are sending data
+	 * through pipe, we need to force the ids allocation,
+	 * because we synthesize event name through the pipe
+	 * and need the id for that.
+	 */
+	if (data->is_pipe && rec->evlist->nr_entries == 1)
+		rec->opts.sample_id = true;
+
 	if (record__open(rec) != 0) {
 		err = -1;
 		goto out_child;
@@ -1542,7 +1552,11 @@ static struct option __record_options[] = {
 	OPT_BOOLEAN(0, "tail-synthesize", &record.opts.tail_synthesize,
 		    "synthesize non-sample events at the end of output"),
 	OPT_BOOLEAN(0, "overwrite", &record.opts.overwrite, "use overwrite mode"),
-	OPT_UINTEGER('F', "freq", &record.opts.user_freq, "profile at this frequency"),
+	OPT_BOOLEAN(0, "strict-freq", &record.opts.strict_freq,
+		    "Fail if the specified frequency can't be used"),
+	OPT_CALLBACK('F', "freq", &record.opts, "freq or 'max'",
+		     "profile at this frequency",
+		     record__parse_freq),
 	OPT_CALLBACK('m', "mmap-pages", &record.opts, "pages[,pages]",
 		     "number of mmap data pages and AUX area tracing mmap pages",
 		     record__parse_mmap_pages),
@@ -1651,6 +1665,8 @@ int cmd_record(int argc, const char **argv)
 	struct record *rec = &record;
 	char errbuf[BUFSIZ];
 
+	setlocale(LC_ALL, "");
+
 #ifndef HAVE_LIBBPF_SUPPORT
 # define set_nobuild(s, l, c) set_option_nobuild(record_options, s, l, "NO_LIBBPF=1", c)
 	set_nobuild('\0', "clang-path", true);
...
@@ -508,14 +508,13 @@ static int perf_stat_synthesize_config(bool is_pipe)
 
 #define FD(e, x, y) (*(int *)xyarray__entry(e->fd, x, y))
 
-static int __store_counter_ids(struct perf_evsel *counter,
-			       struct cpu_map *cpus,
-			       struct thread_map *threads)
+static int __store_counter_ids(struct perf_evsel *counter)
 {
 	int cpu, thread;
 
-	for (cpu = 0; cpu < cpus->nr; cpu++) {
-		for (thread = 0; thread < threads->nr; thread++) {
+	for (cpu = 0; cpu < xyarray__max_x(counter->fd); cpu++) {
+		for (thread = 0; thread < xyarray__max_y(counter->fd);
+		     thread++) {
 			int fd = FD(counter, cpu, thread);
 
 			if (perf_evlist__id_add_fd(evsel_list, counter,
@@ -535,7 +534,7 @@ static int store_counter_ids(struct perf_evsel *counter)
 	if (perf_evsel__alloc_id(counter, cpus->nr, threads->nr))
 		return -ENOMEM;
 
-	return __store_counter_ids(counter, cpus, threads);
+	return __store_counter_ids(counter);
 }
 
 static bool perf_evsel__should_store_id(struct perf_evsel *counter)
@@ -638,7 +637,19 @@ static int __run_perf_stat(int argc, const char **argv)
 				if (verbose > 0)
 					ui__warning("%s\n", msg);
 				goto try_again;
-			}
+			} else if (target__has_per_thread(&target) &&
+				   evsel_list->threads &&
+				   evsel_list->threads->err_thread != -1) {
+				/*
+				 * For global --per-thread case, skip current
+				 * error thread.
+				 */
+				if (!thread_map__remove(evsel_list->threads,
+							evsel_list->threads->err_thread)) {
+					evsel_list->threads->err_thread = -1;
+					goto try_again;
+				}
+			}
 
 			perf_evsel__open_strerror(counter, &target,
 						  errno, msg, sizeof(msg));
...
@@ -991,7 +991,7 @@ static int perf_top_overwrite_fallback(struct perf_top *top,
 	evlist__for_each_entry(evlist, counter)
 		counter->attr.write_backward = false;
 	opts->overwrite = false;
-	ui__warning("fall back to non-overwrite mode\n");
+	pr_debug2("fall back to non-overwrite mode\n");
 	return 1;
 }
 
@@ -1307,7 +1307,9 @@ int cmd_top(int argc, const char **argv)
 	OPT_STRING(0, "sym-annotate", &top.sym_filter, "symbol name",
 		   "symbol to annotate"),
 	OPT_BOOLEAN('z', "zero", &top.zero, "zero history across updates"),
-	OPT_UINTEGER('F', "freq", &opts->user_freq, "profile at this frequency"),
+	OPT_CALLBACK('F', "freq", &top.record_opts, "freq or 'max'",
+		     "profile at this frequency",
+		     record__parse_freq),
 	OPT_INTEGER('E', "entries", &top.print_entries,
 		    "display this many functions"),
 	OPT_BOOLEAN('U', "hide_user_symbols", &top.hide_user_symbols,
...
@@ -2472,8 +2472,14 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
 
 	for (i = 0; i < evlist->nr_mmaps; i++) {
 		union perf_event *event;
+		struct perf_mmap *md;
+		u64 end, start;
 
-		while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) {
+		md = &evlist->mmap[i];
+		if (perf_mmap__read_init(md, false, &start, &end) < 0)
+			continue;
+
+		while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
 			struct perf_sample sample;
 
 			++trace->nr_events;
@@ -2486,7 +2492,7 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
 			trace__handle_event(trace, event, &sample);
 next_event:
-			perf_evlist__mmap_consume(evlist, i);
+			perf_mmap__consume(md, false);
 
 			if (interrupted)
 				goto out_disable;
@@ -2496,6 +2502,7 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
 				draining = true;
 			}
 		}
+		perf_mmap__read_done(md);
 	}
 
 	if (trace->nr_events == before) {
...
@@ -61,6 +61,8 @@ struct record_opts {
 	bool	     tail_synthesize;
 	bool	     overwrite;
 	bool	     ignore_missing_thread;
+	bool	     strict_freq;
+	bool	     sample_id;
 	unsigned int freq;
 	unsigned int mmap_pages;
 	unsigned int auxtrace_mmap_pages;
@@ -82,4 +84,6 @@ struct record_opts {
 struct option;
 extern const char * const *record_usage;
 extern struct option *record_options;
+
+int record__parse_freq(const struct option *opt, const char *str, int unset);
 #endif
...
@@ -176,13 +176,20 @@ static int do_test(struct bpf_object *obj, int (*func)(void),
 	for (i = 0; i < evlist->nr_mmaps; i++) {
 		union perf_event *event;
+		struct perf_mmap *md;
+		u64 end, start;
 
-		while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) {
+		md = &evlist->mmap[i];
+		if (perf_mmap__read_init(md, false, &start, &end) < 0)
+			continue;
+
+		while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
 			const u32 type = event->header.type;
 
 			if (type == PERF_RECORD_SAMPLE)
 				count ++;
 		}
+		perf_mmap__read_done(md);
 	}
 
 	if (count != expect) {
...
@@ -409,15 +409,22 @@ static int process_events(struct machine *machine, struct perf_evlist *evlist,
 			  struct state *state)
 {
 	union perf_event *event;
+	struct perf_mmap *md;
+	u64 end, start;
 	int i, ret;
 
 	for (i = 0; i < evlist->nr_mmaps; i++) {
-		while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) {
+		md = &evlist->mmap[i];
+		if (perf_mmap__read_init(md, false, &start, &end) < 0)
+			continue;
+
+		while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
 			ret = process_event(machine, evlist, event, state);
-			perf_evlist__mmap_consume(evlist, i);
+			perf_mmap__consume(md, false);
 			if (ret < 0)
 				return ret;
 		}
+		perf_mmap__read_done(md);
 	}
 	return 0;
 }
...
@@ -27,18 +27,24 @@
 static int find_comm(struct perf_evlist *evlist, const char *comm)
 {
 	union perf_event *event;
+	struct perf_mmap *md;
+	u64 end, start;
 	int i, found;
 
 	found = 0;
 	for (i = 0; i < evlist->nr_mmaps; i++) {
-		while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) {
+		md = &evlist->mmap[i];
+		if (perf_mmap__read_init(md, false, &start, &end) < 0)
+			continue;
+
+		while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
 			if (event->header.type == PERF_RECORD_COMM &&
 			    (pid_t)event->comm.pid == getpid() &&
 			    (pid_t)event->comm.tid == getpid() &&
 			    strcmp(event->comm.comm, comm) == 0)
 				found += 1;
-			perf_evlist__mmap_consume(evlist, i);
+			perf_mmap__consume(md, false);
 		}
+		perf_mmap__read_done(md);
 	}
 	return found;
 }
...
@@ -38,6 +38,8 @@ int test__basic_mmap(struct test *test __maybe_unused, int subtest __maybe_unused)
 		expected_nr_events[nsyscalls], i, j;
 	struct perf_evsel *evsels[nsyscalls], *evsel;
 	char sbuf[STRERR_BUFSIZE];
+	struct perf_mmap *md;
+	u64 end, start;
 
 	threads = thread_map__new(-1, getpid(), UINT_MAX);
 	if (threads == NULL) {
@@ -106,7 +108,11 @@ int test__basic_mmap(struct test *test __maybe_unused, int subtest __maybe_unused)
 			++foo;
 		}
 
-	while ((event = perf_evlist__mmap_read(evlist, 0)) != NULL) {
+	md = &evlist->mmap[0];
+	if (perf_mmap__read_init(md, false, &start, &end) < 0)
+		goto out_init;
+
+	while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
 		struct perf_sample sample;
 
 		if (event->header.type != PERF_RECORD_SAMPLE) {
@@ -129,9 +135,11 @@ int test__basic_mmap(struct test *test __maybe_unused, int subtest __maybe_unused)
 			goto out_delete_evlist;
 		}
 		nr_events[evsel->idx]++;
-		perf_evlist__mmap_consume(evlist, 0);
+		perf_mmap__consume(md, false);
 	}
+	perf_mmap__read_done(md);
 
+out_init:
 	err = 0;
 	evlist__for_each_entry(evlist, evsel) {
 		if (nr_events[evsel->idx] != expected_nr_events[evsel->idx]) {
...
@@ -86,8 +86,14 @@ int test__syscall_openat_tp_fields(struct test *test __maybe_unused, int subtest __maybe_unused)
 		for (i = 0; i < evlist->nr_mmaps; i++) {
 			union perf_event *event;
+			struct perf_mmap *md;
+			u64 end, start;
 
-			while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) {
+			md = &evlist->mmap[i];
+			if (perf_mmap__read_init(md, false, &start, &end) < 0)
+				continue;
+
+			while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
 				const u32 type = event->header.type;
 				int tp_flags;
 				struct perf_sample sample;
@@ -95,7 +101,7 @@ int test__syscall_openat_tp_fields(struct test *test __maybe_unused, int subtest __maybe_unused)
 				++nr_events;
 
 				if (type != PERF_RECORD_SAMPLE) {
-					perf_evlist__mmap_consume(evlist, i);
+					perf_mmap__consume(md, false);
 					continue;
 				}
@@ -115,6 +121,7 @@ int test__syscall_openat_tp_fields(struct test *test __maybe_unused, int subtest __maybe_unused)
 				goto out_ok;
 			}
+			perf_mmap__read_done(md);
 		}
 
 		if (nr_events == before)
...
@@ -164,8 +164,14 @@ int test__PERF_RECORD(struct test *test __maybe_unused, int subtest __maybe_unused)
 		for (i = 0; i < evlist->nr_mmaps; i++) {
 			union perf_event *event;
+			struct perf_mmap *md;
+			u64 end, start;
 
-			while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) {
+			md = &evlist->mmap[i];
+			if (perf_mmap__read_init(md, false, &start, &end) < 0)
+				continue;
+
+			while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
 				const u32 type = event->header.type;
 				const char *name = perf_event__name(type);
@@ -266,8 +272,9 @@ int test__PERF_RECORD(struct test *test __maybe_unused, int subtest __maybe_unused)
 					++errs;
 				}
 
-				perf_evlist__mmap_consume(evlist, i);
+				perf_mmap__consume(md, false);
 			}
+			perf_mmap__read_done(md);
 		}
 
 	/*
...
@@ -15,30 +15,28 @@ nm -g $libc 2>/dev/null | fgrep -q inet_pton || exit 254
 trace_libc_inet_pton_backtrace() {
 	idx=0
-	expected[0]="PING.*bytes"
-	expected[1]="64 bytes from ::1.*"
-	expected[2]=".*ping statistics.*"
-	expected[3]=".*packets transmitted.*"
-	expected[4]="rtt min.*"
-	expected[5]="[0-9]+\.[0-9]+[[:space:]]+probe_libc:inet_pton:\([[:xdigit:]]+\)"
-	expected[6]=".*inet_pton[[:space:]]\($libc|inlined\)$"
+	expected[0]="ping[][0-9 \.:]+probe_libc:inet_pton: \([[:xdigit:]]+\)"
+	expected[1]=".*inet_pton[[:space:]]\($libc\)$"
 	case "$(uname -m)" in
 	s390x)
 		eventattr='call-graph=dwarf'
-		expected[7]="gaih_inet.*[[:space:]]\($libc|inlined\)$"
-		expected[8]="__GI_getaddrinfo[[:space:]]\($libc|inlined\)$"
-		expected[9]="main[[:space:]]\(.*/bin/ping.*\)$"
-		expected[10]="__libc_start_main[[:space:]]\($libc\)$"
-		expected[11]="_start[[:space:]]\(.*/bin/ping.*\)$"
+		expected[2]="gaih_inet.*[[:space:]]\($libc|inlined\)$"
+		expected[3]="__GI_getaddrinfo[[:space:]]\($libc|inlined\)$"
+		expected[4]="main[[:space:]]\(.*/bin/ping.*\)$"
+		expected[5]="__libc_start_main[[:space:]]\($libc\)$"
+		expected[6]="_start[[:space:]]\(.*/bin/ping.*\)$"
 		;;
 	*)
 		eventattr='max-stack=3'
-		expected[7]="getaddrinfo[[:space:]]\($libc\)$"
-		expected[8]=".*\(.*/bin/ping.*\)$"
+		expected[2]="getaddrinfo[[:space:]]\($libc\)$"
+		expected[3]=".*\(.*/bin/ping.*\)$"
 		;;
 	esac
 
-	perf trace --no-syscalls -e probe_libc:inet_pton/$eventattr/ ping -6 -c 1 ::1 2>&1 | grep -v ^$ | while read line ; do
+	file=`mktemp -u /tmp/perf.data.XXX`
+
+	perf record -e probe_libc:inet_pton/$eventattr/ -o $file ping -6 -c 1 ::1 > /dev/null 2>&1
+	perf script -i $file | while read line ; do
 		echo $line
 		echo "$line" | egrep -q "${expected[$idx]}"
 		if [ $? -ne 0 ] ; then
@@ -48,6 +46,8 @@ trace_libc_inet_pton_backtrace() {
 		let idx+=1
 		[ -z "${expected[$idx]}" ] && break
 	done
+
+	rm -f $file
 }
 
 # Check for IPv6 interface existence
...
@@ -39,6 +39,8 @@ static int __test__sw_clock_freq(enum perf_sw_ids clock_id)
 	};
 	struct cpu_map *cpus;
 	struct thread_map *threads;
+	struct perf_mmap *md;
+	u64 end, start;
 
 	attr.sample_freq = 500;
@@ -93,7 +95,11 @@ static int __test__sw_clock_freq(enum perf_sw_ids clock_id)
 	perf_evlist__disable(evlist);
 
-	while ((event = perf_evlist__mmap_read(evlist, 0)) != NULL) {
+	md = &evlist->mmap[0];
+	if (perf_mmap__read_init(md, false, &start, &end) < 0)
+		goto out_init;
+
+	while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
 		struct perf_sample sample;
 
 		if (event->header.type != PERF_RECORD_SAMPLE)
@@ -108,9 +114,11 @@ static int __test__sw_clock_freq(enum perf_sw_ids clock_id)
 		total_periods += sample.period;
 		nr_samples++;
 next_event:
-		perf_evlist__mmap_consume(evlist, 0);
+		perf_mmap__consume(md, false);
 	}
+	perf_mmap__read_done(md);
 
+out_init:
 	if ((u64) nr_samples == total_periods) {
 		pr_debug("All (%d) samples have period value of 1!\n",
 			 nr_samples);
...
@@ -258,16 +258,23 @@ static int process_events(struct perf_evlist *evlist,
 	unsigned pos, cnt = 0;
 	LIST_HEAD(events);
 	struct event_node *events_array, *node;
+	struct perf_mmap *md;
+	u64 end, start;
 	int i, ret;
 
 	for (i = 0; i < evlist->nr_mmaps; i++) {
-		while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) {
+		md = &evlist->mmap[i];
+		if (perf_mmap__read_init(md, false, &start, &end) < 0)
+			continue;
+
+		while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
 			cnt += 1;
 			ret = add_event(evlist, &events, event);
-			perf_evlist__mmap_consume(evlist, i);
+			perf_mmap__consume(md, false);
 			if (ret < 0)
 				goto out_free_nodes;
 		}
+		perf_mmap__read_done(md);
 	}
 
 	events_array = calloc(cnt, sizeof(struct event_node));
...
@@ -47,6 +47,8 @@ int test__task_exit(struct test *test __maybe_unused, int subtest __maybe_unused)
 	char sbuf[STRERR_BUFSIZE];
 	struct cpu_map *cpus;
 	struct thread_map *threads;
+	struct perf_mmap *md;
+	u64 end, start;
 
 	signal(SIGCHLD, sig_handler);
@@ -110,13 +112,19 @@ int test__task_exit(struct test *test __maybe_unused, int subtest __maybe_unused)
 	perf_evlist__start_workload(evlist);
 
 retry:
-	while ((event = perf_evlist__mmap_read(evlist, 0)) != NULL) {
+	md = &evlist->mmap[0];
+	if (perf_mmap__read_init(md, false, &start, &end) < 0)
+		goto out_init;
+
+	while ((event = perf_mmap__read_event(md, false, &start, end)) != NULL) {
 		if (event->header.type == PERF_RECORD_EXIT)
 			nr_exit++;
 
-		perf_evlist__mmap_consume(evlist, 0);
+		perf_mmap__consume(md, false);
 	}
+	perf_mmap__read_done(md);
 
+out_init:
 	if (!exited || !nr_exit) {
 		perf_evlist__poll(evlist, -1);
 		goto retry;
...
@@ -328,7 +328,32 @@ static void annotate_browser__draw_current_jump(struct ui_browser *browser)
 	if (!disasm_line__is_valid_jump(cursor, sym))
 		return;
 
+	/*
+	 * This first was seen with a gcc function, _cpp_lex_token, that
+	 * has the usual jumps:
+	 *
+	 *  │1159e6c: ↓ jne    115aa32 <_cpp_lex_token@@Base+0xf92>
+	 *
+	 * I.e. jumps to a label inside that function (_cpp_lex_token), and
+	 * those works, but also this kind:
+	 *
+	 *  │1159e8b: ↓ jne    c469be <cpp_named_operator2name@@Base+0xa72>
+	 *
+	 * I.e. jumps to another function, outside _cpp_lex_token, which
+	 * are not being correctly handled generating as a side effect references
+	 * to ab->offset[] entries that are set to NULL, so to make this code
+	 * more robust, check that here.
+	 *
+	 * A proper fix for will be put in place, looking at the function
+	 * name right after the '<' token and probably treating this like a
+	 * 'call' instruction.
+	 */
 	target = ab->offsets[cursor->ops.target.offset];
+	if (target == NULL) {
+		ui_helpline__printf("WARN: jump target inconsistency, press 'o', ab->offsets[%#x] = NULL\n",
+				    cursor->ops.target.offset);
+		return;
+	}
 
 	bcursor = browser_line(&cursor->al);
 	btarget = browser_line(target);
@@ -543,35 +568,28 @@ static bool annotate_browser__callq(struct annotate_browser *browser,
 	struct map_symbol *ms = browser->b.priv;
 	struct disasm_line *dl = disasm_line(browser->selection);
 	struct annotation *notes;
-	struct addr_map_symbol target = {
-		.map = ms->map,
-		.addr = map__objdump_2mem(ms->map, dl->ops.target.addr),
-	};
 	char title[SYM_TITLE_MAX_SIZE];
 
 	if (!ins__is_call(&dl->ins))
 		return false;
 
-	if (map_groups__find_ams(&target) ||
-	    map__rip_2objdump(target.map, target.map->map_ip(target.map,
-							     target.addr)) !=
-	    dl->ops.target.addr) {
+	if (!dl->ops.target.sym) {
 		ui_helpline__puts("The called function was not found.");
 		return true;
 	}
 
-	notes = symbol__annotation(target.sym);
+	notes = symbol__annotation(dl->ops.target.sym);
 	pthread_mutex_lock(&notes->lock);
 
-	if (notes->src == NULL && symbol__alloc_hist(target.sym) < 0) {
+	if (notes->src == NULL && symbol__alloc_hist(dl->ops.target.sym) < 0) {
 		pthread_mutex_unlock(&notes->lock);
 		ui__warning("Not enough memory for annotating '%s' symbol!\n",
-			    target.sym->name);
+			    dl->ops.target.sym->name);
 		return true;
 	}
 
 	pthread_mutex_unlock(&notes->lock);
-	symbol__tui_annotate(target.sym, target.map, evsel, hbt);
+	symbol__tui_annotate(dl->ops.target.sym, ms->map, evsel, hbt);
 	sym_title(ms->sym, ms->map, title, sizeof(title));
 	ui_browser__show_title(&browser->b, title);
 	return true;
...
@@ -2223,7 +2223,7 @@ static int perf_evsel_browser_title(struct hist_browser *browser,
 	u64 nr_events = hists->stats.total_period;
 	struct perf_evsel *evsel = hists_to_evsel(hists);
 	const char *ev_name = perf_evsel__name(evsel);
-	char buf[512];
+	char buf[512], sample_freq_str[64] = "";
 	size_t buflen = sizeof(buf);
 	char ref[30] = " show reference callgraph, ";
 	bool enable_ref = false;
@@ -2255,10 +2255,14 @@ static int perf_evsel_browser_title(struct hist_browser *browser,
 	if (symbol_conf.show_ref_callgraph &&
 	    strstr(ev_name, "call-graph=no"))
 		enable_ref = true;
+
+	if (!is_report_browser(hbt))
+		scnprintf(sample_freq_str, sizeof(sample_freq_str), " %d Hz,", evsel->attr.sample_freq);
+
 	nr_samples = convert_unit(nr_samples, &unit);
 	printed = scnprintf(bf, size,
-			   "Samples: %lu%c of event '%s',%sEvent count (approx.): %" PRIu64,
-			   nr_samples, unit, ev_name, enable_ref ? ref : " ", nr_events);
+			   "Samples: %lu%c of event '%s',%s%sEvent count (approx.): %" PRIu64,
+			   nr_samples, unit, ev_name, sample_freq_str, enable_ref ? ref : " ", nr_events);
 
 	if (hists->uid_filter_str)
...
@@ -187,6 +187,9 @@ bool ins__is_fused(struct arch *arch, const char *ins1, const char *ins2)
 static int call__parse(struct arch *arch, struct ins_operands *ops, struct map *map)
 {
 	char *endptr, *tok, *name;
+	struct addr_map_symbol target = {
+		.map = map,
+	};
 
 	ops->target.addr = strtoull(ops->raw, &endptr, 16);
@@ -208,28 +211,29 @@ static int call__parse(struct arch *arch, struct ins_operands *ops, struct map *map)
 	ops->target.name = strdup(name);
 	*tok = '>';
 
-	return ops->target.name == NULL ? -1 : 0;
+	if (ops->target.name == NULL)
+		return -1;
+find_target:
+	target.addr = map__objdump_2mem(map, ops->target.addr);
 
-indirect_call:
-	tok = strchr(endptr, '*');
-	if (tok == NULL) {
-		struct symbol *sym = map__find_symbol(map, map->map_ip(map, ops->target.addr));
-		if (sym != NULL)
-			ops->target.name = strdup(sym->name);
-		else
-			ops->target.addr = 0;
-		return 0;
-	}
+	if (map_groups__find_ams(&target) == 0 &&
+	    map__rip_2objdump(target.map, map->map_ip(target.map, target.addr)) == ops->target.addr)
+		ops->target.sym = target.sym;
 
-	ops->target.addr = strtoull(tok + 1, NULL, 16);
 	return 0;
+
+indirect_call:
+	tok = strchr(endptr, '*');
+	if (tok != NULL)
+		ops->target.addr = strtoull(tok + 1, NULL, 16);
+	goto find_target;
 }
 
 static int call__scnprintf(struct ins *ins, char *bf, size_t size,
 			   struct ins_operands *ops)
 {
-	if (ops->target.name)
-		return scnprintf(bf, size, "%-6s %s", ins->name, ops->target.name);
+	if (ops->target.sym)
+		return scnprintf(bf, size, "%-6s %s", ins->name, ops->target.sym->name);
 
 	if (ops->target.addr == 0)
 		return ins__raw_scnprintf(ins, bf, size, ops);
@@ -1283,8 +1287,8 @@ static int symbol__parse_objdump_line(struct symbol *sym, FILE *file,
 		dl->ops.target.offset_avail = true;
 	}
 
-	/* kcore has no symbols, so add the call target name */
-	if (dl->ins.ops && ins__is_call(&dl->ins) && !dl->ops.target.name) {
+	/* kcore has no symbols, so add the call target symbol */
+	if (dl->ins.ops && ins__is_call(&dl->ins) && !dl->ops.target.sym) {
 		struct addr_map_symbol target = {
 			.map = map,
 			.addr = dl->ops.target.addr,
@@ -1292,7 +1296,7 @@ static int symbol__parse_objdump_line(struct symbol *sym, FILE *file,
 
 		if (!map_groups__find_ams(&target) &&
 		    target.sym->start == target.al_addr)
-			dl->ops.target.name = strdup(target.sym->name);
+			dl->ops.target.sym = target.sym;
 	}
 
 	annotation_line__add(&dl->al, &notes->src->source);
...
@@ -24,6 +24,7 @@ struct ins_operands {
 	struct {
 		char	*raw;
 		char	*name;
+		struct symbol *sym;
 		u64	addr;
 		s64	offset;
 		bool	offset_avail;
...
@@ -157,9 +157,11 @@ int parse_cgroups(const struct option *opt __maybe_unused, const char *str,
 		  int unset __maybe_unused)
 {
 	struct perf_evlist *evlist = *(struct perf_evlist **)opt->value;
+	struct perf_evsel *counter;
+	struct cgroup_sel *cgrp = NULL;
 	const char *p, *e, *eos = str + strlen(str);
 	char *s;
-	int ret;
+	int ret, i;
 
 	if (list_empty(&evlist->entries)) {
 		fprintf(stderr, "must define events before cgroups\n");
@@ -188,5 +190,18 @@ int parse_cgroups(const struct option *opt __maybe_unused, const char *str,
 			break;
 		str = p+1;
 	}
+	/* for the case one cgroup combine to multiple events */
+	i = 0;
+	if (nr_cgroups == 1) {
+		evlist__for_each_entry(evlist, counter) {
+			if (i == 0)
+				cgrp = counter->cgrp;
+			else {
+				counter->cgrp = cgrp;
+				refcount_inc(&cgrp->refcnt);
+			}
+			i++;
+		}
+	}
 	return 0;
 }
...
@@ -702,29 +702,6 @@ static int perf_evlist__resume(struct perf_evlist *evlist)
 	return perf_evlist__set_paused(evlist, false);
 }
 
-union perf_event *perf_evlist__mmap_read_forward(struct perf_evlist *evlist, int idx)
-{
-	struct perf_mmap *md = &evlist->mmap[idx];
-
-	/*
-	 * Check messup is required for forward overwritable ring buffer:
-	 * memory pointed by md->prev can be overwritten in this case.
-	 * No need for read-write ring buffer: kernel stop outputting when
-	 * it hit md->prev (perf_mmap__consume()).
-	 */
-	return perf_mmap__read_forward(md);
-}
-
-union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx)
-{
-	return perf_evlist__mmap_read_forward(evlist, idx);
-}
-
-void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx)
-{
-	perf_mmap__consume(&evlist->mmap[idx], false);
-}
-
 static void perf_evlist__munmap_nofree(struct perf_evlist *evlist)
 {
 	int i;
@@ -761,7 +738,7 @@ static struct perf_mmap *perf_evlist__alloc_mmap(struct perf_evlist *evlist)
 		map[i].fd = -1;
 		/*
 		 * When the perf_mmap() call is made we grab one refcount, plus
-		 * one extra to let perf_evlist__mmap_consume() get the last
+		 * one extra to let perf_mmap__consume() get the last
 		 * events after all real references (perf_mmap__get()) are
 		 * dropped.
 		 *
...
@@ -129,10 +129,6 @@ struct perf_sample_id *perf_evlist__id2sid(struct perf_evlist *evlist, u64 id);
 
 void perf_evlist__toggle_bkw_mmap(struct perf_evlist *evlist, enum bkw_mmap_state state);
 
-union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx);
-
-union perf_event *perf_evlist__mmap_read_forward(struct perf_evlist *evlist,
-						 int idx);
 void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx);
 
 int perf_evlist__open(struct perf_evlist *evlist);
...
@@ -1915,6 +1915,9 @@ int perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus,
 		goto fallback_missing_features;
 	}
 out_close:
+	if (err)
+		threads->err_thread = thread;
+
 	do {
 		while (--thread >= 0) {
 			close(FD(evsel, cpu, thread));
...
@@ -63,25 +63,6 @@ static union perf_event *perf_mmap__read(struct perf_mmap *map,
 	return event;
 }
 
-/*
- * legacy interface for mmap read.
- * Don't use it. Use perf_mmap__read_event().
- */
-union perf_event *perf_mmap__read_forward(struct perf_mmap *map)
-{
-	u64 head;
-
-	/*
-	 * Check if event was unmapped due to a POLLHUP/POLLERR.
-	 */
-	if (!refcount_read(&map->refcnt))
-		return NULL;
-
-	head = perf_mmap__read_head(map);
-
-	return perf_mmap__read(map, &map->prev, head);
-}
-
 /*
  * Read event from ring buffer one by one.
  * Return one event for each call.
@@ -191,7 +172,7 @@ void perf_mmap__munmap(struct perf_mmap *map)
 int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd)
 {
 	/*
-	 * The last one will be done at perf_evlist__mmap_consume(), so that we
+	 * The last one will be done at perf_mmap__consume(), so that we
 	 * make sure we don't prevent tools from consuming every last event in
 	 * the ring buffer.
 	 *
...
@@ -983,13 +983,19 @@ static PyObject *pyrf_evlist__read_on_cpu(struct pyrf_evlist *pevlist,
 	union perf_event *event;
 	int sample_id_all = 1, cpu;
 	static char *kwlist[] = { "cpu", "sample_id_all", NULL };
+	struct perf_mmap *md;
+	u64 end, start;
 	int err;
 
 	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i|i", kwlist,
 					 &cpu, &sample_id_all))
 		return NULL;
 
-	event = perf_evlist__mmap_read(evlist, cpu);
+	md = &evlist->mmap[cpu];
+	if (perf_mmap__read_init(md, false, &start, &end) < 0)
+		goto end;
+
+	event = perf_mmap__read_event(md, false, &start, end);
 	if (event != NULL) {
 		PyObject *pyevent = pyrf_event__new(event);
 		struct pyrf_event *pevent = (struct pyrf_event *)pyevent;
@@ -1007,14 +1013,14 @@ static PyObject *pyrf_evlist__read_on_cpu(struct pyrf_evlist *pevlist,
 		err = perf_evsel__parse_sample(evsel, event, &pevent->sample);
 
 		/* Consume the even only after we parsed it out. */
-		perf_evlist__mmap_consume(evlist, cpu);
+		perf_mmap__consume(md, false);
 
 		if (err)
 			return PyErr_Format(PyExc_OSError,
 					    "perf: can't parse sample, err=%d", err);
 		return pyevent;
 	}
+end:
 	Py_INCREF(Py_None);
 	return Py_None;
 }
...
@@ -5,6 +5,7 @@
 #include "parse-events.h"
 #include <errno.h>
 #include <api/fs/fs.h>
+#include <subcmd/parse-options.h>
 #include "util.h"
 #include "cloexec.h"
@@ -137,6 +138,7 @@ void perf_evlist__config(struct perf_evlist *evlist, struct record_opts *opts,
 	struct perf_evsel *evsel;
 	bool use_sample_identifier = false;
 	bool use_comm_exec;
+	bool sample_id = opts->sample_id;
 
 	/*
 	 * Set the evsel leader links before we configure attributes,
@@ -163,8 +165,7 @@ void perf_evlist__config(struct perf_evlist *evlist, struct record_opts *opts,
 		 * match the id.
 		 */
 		use_sample_identifier = perf_can_sample_identifier();
-		evlist__for_each_entry(evlist, evsel)
-			perf_evsel__set_sample_id(evsel, use_sample_identifier);
+		sample_id = true;
 	} else if (evlist->nr_entries > 1) {
 		struct perf_evsel *first = perf_evlist__first(evlist);
 
@@ -174,6 +175,10 @@ void perf_evlist__config(struct perf_evlist *evlist, struct record_opts *opts,
 			use_sample_identifier = perf_can_sample_identifier();
 			break;
 		}
+		sample_id = true;
+	}
+
+	if (sample_id) {
 		evlist__for_each_entry(evlist, evsel)
 			perf_evsel__set_sample_id(evsel, use_sample_identifier);
 	}
@@ -215,11 +220,21 @@ static int record_opts__config_freq(struct record_opts *opts)
 	 * User specified frequency is over current maximum.
 	 */
 	if (user_freq && (max_rate < opts->freq)) {
-		pr_err("Maximum frequency rate (%u) reached.\n"
-		   "Please use -F freq option with lower value or consider\n"
-		   "tweaking /proc/sys/kernel/perf_event_max_sample_rate.\n",
-		   max_rate);
-		return -1;
+		if (opts->strict_freq) {
+			pr_err("error: Maximum frequency rate (%'u Hz) exceeded.\n"
+			       "       Please use -F freq option with a lower value or consider\n"
+			       "       tweaking /proc/sys/kernel/perf_event_max_sample_rate.\n",
+			       max_rate);
+			return -1;
+		} else {
+			pr_warning("warning: Maximum frequency rate (%'u Hz) exceeded, throttling from %'u Hz to %'u Hz.\n"
+				   "         The limit can be raised via /proc/sys/kernel/perf_event_max_sample_rate.\n"
+				   "         The kernel will lower it when perf's interrupts take too long.\n"
+				   "         Use --strict-freq to disable this throttling, refusing to record.\n",
+				   max_rate, opts->freq, max_rate);
+			opts->freq = max_rate;
+		}
 	}
 
 	/*
@@ -287,3 +302,25 @@ bool perf_evlist__can_select_event(struct perf_evlist *evlist, const char *str)
 	perf_evlist__delete(temp_evlist);
 	return ret;
 }
+
+int record__parse_freq(const struct option *opt, const char *str, int unset __maybe_unused)
+{
+	unsigned int freq;
+	struct record_opts *opts = opt->value;
+
+	if (!str)
+		return -EINVAL;
+
+	if (strcasecmp(str, "max") == 0) {
+		if (get_max_rate(&freq)) {
+			pr_err("couldn't read /proc/sys/kernel/perf_event_max_sample_rate\n");
+			return -1;
+		}
+		pr_info("info: Using a maximum frequency rate of %'d Hz\n", freq);
+	} else {
+		freq = atoi(str);
+	}
+	opts->user_freq = freq;
+
+	return 0;
+}
...
@@ -32,6 +32,7 @@ static void thread_map__reset(struct thread_map *map, int start, int nr)
 	size_t size = (nr - start) * sizeof(map->map[0]);
 
 	memset(&map->map[start], 0, size);
+	map->err_thread = -1;
 }
 
 static struct thread_map *thread_map__realloc(struct thread_map *map, int nr)
...
@@ -14,6 +14,7 @@ struct thread_map_data {
 
 struct thread_map {
 	refcount_t refcnt;
 	int nr;
+	int err_thread;
 	struct thread_map_data map[];
 };
...