1. 30 Apr, 2020 5 commits
    • libtraceevent: Remove unneeded semicolon · eebe80c9
      Zou Wei authored
      Fixes coccicheck warning:
      
       tools/lib/traceevent/kbuffer-parse.c:441:2-3: Unneeded semicolon
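
      For context, the warning fires on a stray ';' after a closing brace. A tiny
      illustration of the pattern (hypothetical code, not the actual
      kbuffer-parse.c hunk):

        static int type_size(int type)
        {
                switch (type) {
                case 0:
                        return 4;
                default:
                        return 8;
                };      /* <-- coccicheck flags this ';'; the fix drops it */
        }
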
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Signed-off-by: Zou Wei <zou_wei@huawei.com>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Link: http://lore.kernel.org/lkml/1588065121-71236-1-git-send-email-zou_wei@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf script: Remove extraneous newline in perf_sample__fprintf_regs() · fad1f1e7
      Stephane Eranian authored
      When printing iregs, a double newline was printed because
      perf_sample__fprintf_regs() was printing its own newline and then, at the
      end of all fields, perf script was adding another one. This was causing
      blank lines in the output:
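
      A hypothetical sketch of the pattern (simplified, not the actual perf
      sources): the helper ended the line itself while the caller also
      terminated each sample, yielding two newlines per sample:

        #include <stdio.h>

        /* stand-in for perf_sample__fprintf_regs() */
        static void fprintf_regs(FILE *fp)
        {
                fprintf(fp, " ABI:2    DX:0x100");
                fprintf(fp, "\n");      /* the fix removes this trailing newline... */
        }

        int main(void)
        {
                fprintf(stdout, "401b8d");
                fprintf_regs(stdout);
                fprintf(stdout, "\n");  /* ...because the caller already ends the line */
                return 0;
        }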
      
      Before:
      
        $ perf script -Fip,iregs
                   401b8d ABI:2    DX:0x100    SI:0x4a8340    DI:0x4a9340
      
                   401b8d ABI:2    DX:0x100    SI:0x4a9340    DI:0x4a8340
      
                   401b8d ABI:2    DX:0x100    SI:0x4a8340    DI:0x4a9340
      
                   401b8d ABI:2    DX:0x100    SI:0x4a9340    DI:0x4a8340
      
      After:
      
        $ perf script -Fip,iregs
                   401b8d ABI:2    DX:0x100    SI:0x4a8340    DI:0x4a9340
                   401b8d ABI:2    DX:0x100    SI:0x4a9340    DI:0x4a8340
                   401b8d ABI:2    DX:0x100    SI:0x4a8340    DI:0x4a9340
      
      Committer testing:
      
      First we need to figure out how to request that registers be recorded,
      so we use:
      
        # perf record -h reg
      
         Usage: perf record [<options>] [<command>]
            or: perf record [<options>] -- <command> [<options>]
      
            -I, --intr-regs[=<any register>]
                                  sample selected machine registers on interrupt, use '-I?' to list register names
                --buildid-all     Record build-id of all DSOs regardless of hits
                --user-regs[=<any register>]
                                  sample selected machine registers on interrupt, use '--user-regs=?' to list register names
      
        #
      
      Ok, now let's ask for them all:
      
        # perf record -a --intr-regs --user-regs sleep 1
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 4.105 MB perf.data (2760 samples) ]
        #
      
      Let's look at the first 6 output lines:
      
        # perf script -Fip,iregs | head -6
         ffffffff8a06f2f4 ABI:2    AX:0xffffd168fee0a980    BX:0xffff8a23b087f000    CX:0xfffeb69aaeb25d73    DX:0xffff8a253e8310f0    SI:0xfffffff9bafe7359    DI:0xffffb1690204fb10    BP:0xffffd168fee0a950    SP:0xffffb1690204fb88    IP:0xffffffff8a06f2f4 FLAGS:0x4e    CS:0x10    SS:0x18    R8:0x1495f0a91129a    R9:0xffff8a23b087f000   R10:0x1   R11:0xffffffff   R12:0x0   R13:0xffff8a253e827e00   R14:0xffffd168fee0aa5c   R15:0xffffd168fee0a980
      
         ffffffff8a06f2f4 ABI:2    AX:0x0    BX:0xffffd168fee0a950    CX:0x5684cc1118491900    DX:0x0    SI:0xffffd168fee0a9d0    DI:0x202    BP:0xffffb1690204fd70    SP:0xffffb1690204fd20    IP:0xffffffff8a06f2f4 FLAGS:0x24e    CS:0x10    SS:0x18    R8:0x0    R9:0xffffd168fee0a9d0   R10:0x1   R11:0xffffffff   R12:0xffffffff8a23e480   R13:0xffff8a23b087f240   R14:0xffff8a23b087f000   R15:0xffffd168fee0a950
      
         ffffffff8a06f2f4 ABI:2    AX:0x0    BX:0x0    CX:0x7f25f334335b    DX:0x0    SI:0x2400    DI:0x4    BP:0x7fff5f264570    SP:0x7fff5f264538    IP:0xffffffff8a06f2f4 FLAGS:0x24e    CS:0x10    SS:0x2b    R8:0x0    R9:0x2312d20   R10:0x0   R11:0x246   R12:0x22cc0e0   R13:0x0   R14:0x0   R15:0x22d0780
      
        #
      
      Reproduced, apply the patch and:
      
      [root@five ~]# perf script -Fip,iregs | head -6
       ffffffff8a06f2f4 ABI:2    AX:0xffffd168fee0a980    BX:0xffff8a23b087f000    CX:0xfffeb69aaeb25d73    DX:0xffff8a253e8310f0    SI:0xfffffff9bafe7359    DI:0xffffb1690204fb10    BP:0xffffd168fee0a950    SP:0xffffb1690204fb88    IP:0xffffffff8a06f2f4 FLAGS:0x4e    CS:0x10    SS:0x18    R8:0x1495f0a91129a    R9:0xffff8a23b087f000   R10:0x1   R11:0xffffffff   R12:0x0   R13:0xffff8a253e827e00   R14:0xffffd168fee0aa5c   R15:0xffffd168fee0a980
       ffffffff8a06f2f4 ABI:2    AX:0x0    BX:0xffffd168fee0a950    CX:0x5684cc1118491900    DX:0x0    SI:0xffffd168fee0a9d0    DI:0x202    BP:0xffffb1690204fd70    SP:0xffffb1690204fd20    IP:0xffffffff8a06f2f4 FLAGS:0x24e    CS:0x10    SS:0x18    R8:0x0    R9:0xffffd168fee0a9d0   R10:0x1   R11:0xffffffff   R12:0xffffffff8a23e480   R13:0xffff8a23b087f240   R14:0xffff8a23b087f000   R15:0xffffd168fee0a950
       ffffffff8a06f2f4 ABI:2    AX:0x0    BX:0x0    CX:0x7f25f334335b    DX:0x0    SI:0x2400    DI:0x4    BP:0x7fff5f264570    SP:0x7fff5f264538    IP:0xffffffff8a06f2f4 FLAGS:0x24e    CS:0x10    SS:0x2b    R8:0x0    R9:0x2312d20   R10:0x0   R11:0x246   R12:0x22cc0e0   R13:0x0   R14:0x0   R15:0x22d0780
       ffffffff8a24074b ABI:2    AX:0xcb    BX:0xcb    CX:0x0    DX:0x0    SI:0xffffb1690204ff58    DI:0xcb    BP:0xffffb1690204ff58    SP:0xffffb1690204ff40    IP:0xffffffff8a24074b FLAGS:0x24e    CS:0x10    SS:0x18    R8:0x0    R9:0x0   R10:0x0   R11:0x0   R12:0x0   R13:0x0   R14:0x0   R15:0x0
       ffffffff8a310600 ABI:2    AX:0x0    BX:0xffffffff8b8c39a0    CX:0x0    DX:0xffff8a2503890300    SI:0xffffb1690204ff20    DI:0xffff8a23e4080000    BP:0xffff8a23e4080000    SP:0xffffb1690204fec0    IP:0xffffffff8a310600 FLAGS:0x28e    CS:0x10    SS:0x18    R8:0x0    R9:0x0   R10:0x0   R11:0x0   R12:0xffffffffffffffea   R13:0xffff8a23e4080020   R14:0x0   R15:0x0
       ffffffff8a11b688 ABI:2    AX:0x0    BX:0xffff8a237b7c8800    CX:0xffffb1690204fae0    DX:0x78    SI:0xffff8a237b7c8800    DI:0xffffb1690204fa10    BP:0xffffb1690204fb00    SP:0xffffb1690204fa00    IP:0xffffffff8a11b688 FLAGS:0x8a    CS:0x10    SS:0x18    R8:0x1495f0a917eba    R9:0xffffd168fde19a48   R10:0xffffb1690204fd98   R11:0xffff8a253e82afb0   R12:0xffff8a237b7c8800   R13:0xffffb1690204fb00   R14:0x0   R15:0xffff8a237b7c8800
      [root@five ~]#
      
      To see it more clearly, let's get just two of those registers per sample:
      
        # perf record -a --intr-regs=ax,bx --user-regs=cx,dx sleep 1
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 3.502 MB perf.data (1653 samples) ]
        #
      
      Extra info: let's see what gets set up in that 'struct perf_event_attr':
      
        # perf evlist -v
        cycles: size: 120, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|REGS_USER|REGS_INTR, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 2, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, sample_regs_user: 0xc, sample_regs_intr: 0x3
        #
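
      Those sample_regs_user/sample_regs_intr values are bitmasks over the
      architecture's perf register numbering; a small sketch decoding the two
      masks above, assuming the x86 order (AX=0, BX=1, CX=2, DX=3):

        #include <stdio.h>

        /* first four entries of the x86 perf register numbering (assumed) */
        static const char * const reg_names[] = { "AX", "BX", "CX", "DX" };

        static void decode_mask(const char *field, unsigned long mask)
        {
                printf("%s = 0x%lx:", field, mask);
                for (unsigned int bit = 0; bit < 4; bit++)
                        if (mask & (1UL << bit))
                                printf(" %s", reg_names[bit]);
                printf("\n");
        }

        int main(void)
        {
                decode_mask("sample_regs_intr", 0x3);   /* AX BX <- --intr-regs=ax,bx */
                decode_mask("sample_regs_user", 0xc);   /* CX DX <- --user-regs=cx,dx */
                return 0;
        }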
      
      Cool, some PERF_SAMPLE_REGS_USER|PERF_SAMPLE_REGS_INTR plus the
      attr.sample_regs_user and attr.sample_regs_intr register masks. Now let's
      see if those newlines are gone, in a more compact fashion:
      
        # perf script -Fip,iregs,uregs
         ffffffff8a56df78 ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a56df78 ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a56df78 ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a56df78 ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a56df78 ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a56df78 ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a29b78d ABI:2    AX:0x2a20ffcd6000    BX:0x2ec7d9000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
        #
      
      And where was that?
      
        # perf script -Fip,iregs,uregs,sym,dso
         ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a56df78 strrchr (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2    AX:0xffff8a25137b6028    BX:0xffff8a2502f18000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
         ffffffff8a29b78d __vma_link_rb (/lib/modules/5.7.0-rc2/build/vmlinux) ABI:2    AX:0x2a20ffcd6000    BX:0x2ec7d9000  ABI:2    CX:0x7f204460e49b    DX:0xf42920
        #
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Ian Rogers <irogers@google.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lore.kernel.org/lkml/20200418231908.152212-1-eranian@google.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf synthetic events: Remove use of sscanf from /proc reading · 2069425e
      Ian Rogers authored
      The synthesize benchmark, run on a single process and thread, shows
      perf_event__synthesize_mmap_events as the hottest function with fgets
      and sscanf taking the majority of execution time.
      
      fscanf performs similarly well. Replace the scanf call with manual
      reading of each field of the /proc/pid/maps line, and remove some
      unnecessary buffering.
      
      This change also addresses potential, but unlikely, buffer overruns for
      the string values read by scanf.
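
      A simplified, self-contained sketch of the manual field-by-field parse
      (illustrative only, not the perf code):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
                char line[1024], path[256];
                FILE *fp = fopen("/proc/self/maps", "r");

                if (!fp)
                        return 1;
                while (fgets(line, sizeof(line), fp)) {
                        char *p = line, prot[5];
                        unsigned long start, end, pgoff, maj, min;
                        unsigned long long ino;

                        start = strtoul(p, &p, 16); p++;        /* skip '-' */
                        end   = strtoul(p, &p, 16); p++;        /* skip ' ' */
                        memcpy(prot, p, 4); prot[4] = '\0'; p += 5;
                        pgoff = strtoul(p, &p, 16); p++;        /* skip ' ' */
                        maj   = strtoul(p, &p, 16); p++;        /* skip ':' */
                        min   = strtoul(p, &p, 16); p++;        /* skip ' ' */
                        ino   = strtoull(p, &p, 10);
                        while (*p == ' ')
                                p++;
                        /* bounded copy of the pathname, unlike a bare %s */
                        snprintf(path, sizeof(path), "%.*s",
                                 (int)strcspn(p, "\n"), p);
                        printf("%lx-%lx %s %lx %lx:%lx %llu %s\n",
                               start, end, prot, pgoff, maj, min, ino, path);
                }
                fclose(fp);
                return 0;
        }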
      
      Performance before is:
      
        $ sudo perf bench internals synthesize -m 16 -M 16 -s -t
        # Running 'internals/synthesize' benchmark:
        Computing performance of single threaded perf event synthesis by
        synthesizing events on the perf process itself:
          Average synthesis took: 102.810 usec (+- 0.027 usec)
          Average num. events: 17.000 (+- 0.000)
          Average time per event 6.048 usec
          Average data synthesis took: 106.325 usec (+- 0.018 usec)
          Average num. events: 89.000 (+- 0.000)
          Average time per event 1.195 usec
        Computing performance of multi threaded perf event synthesis by
        synthesizing events on CPU 0:
          Number of synthesis threads: 16
            Average synthesis took: 68103.100 usec (+- 441.234 usec)
            Average num. events: 30703.000 (+- 0.730)
            Average time per event 2.218 usec
      
      And after is:
      
        $ sudo perf bench internals synthesize -m 16 -M 16 -s -t
        # Running 'internals/synthesize' benchmark:
        Computing performance of single threaded perf event synthesis by
        synthesizing events on the perf process itself:
          Average synthesis took: 50.388 usec (+- 0.031 usec)
          Average num. events: 17.000 (+- 0.000)
          Average time per event 2.964 usec
          Average data synthesis took: 52.693 usec (+- 0.020 usec)
          Average num. events: 89.000 (+- 0.000)
          Average time per event 0.592 usec
        Computing performance of multi threaded perf event synthesis by
        synthesizing events on CPU 0:
          Number of synthesis threads: 16
            Average synthesis took: 45022.400 usec (+- 552.740 usec)
            Average num. events: 30624.200 (+- 10.037)
            Average time per event 1.470 usec
      
      On an Intel Xeon 6154, compiling with Debian gcc 9.2.1.
      
      Committer testing:
      
      On an AMD Ryzen 5 3600X 6-Core Processor:
      
      Before:
      
        # perf bench internals synthesize --min-threads 12 --max-threads 12 --st --mt
        # Running 'internals/synthesize' benchmark:
        Computing performance of single threaded perf event synthesis by
        synthesizing events on the perf process itself:
          Average synthesis took: 267.491 usec (+- 0.176 usec)
          Average num. events: 56.000 (+- 0.000)
          Average time per event 4.777 usec
          Average data synthesis took: 277.257 usec (+- 0.169 usec)
          Average num. events: 287.000 (+- 0.000)
          Average time per event 0.966 usec
        Computing performance of multi threaded perf event synthesis by
        synthesizing events on CPU 0:
          Number of synthesis threads: 12
            Average synthesis took: 81599.500 usec (+- 346.315 usec)
            Average num. events: 36096.100 (+- 2.523)
            Average time per event 2.261 usec
        #
      
      After:
      
        # perf bench internals synthesize --min-threads 12 --max-threads 12 --st --mt
        # Running 'internals/synthesize' benchmark:
        Computing performance of single threaded perf event synthesis by
        synthesizing events on the perf process itself:
          Average synthesis took: 110.125 usec (+- 0.080 usec)
          Average num. events: 56.000 (+- 0.000)
          Average time per event 1.967 usec
          Average data synthesis took: 118.518 usec (+- 0.057 usec)
          Average num. events: 287.000 (+- 0.000)
          Average time per event 0.413 usec
        Computing performance of multi threaded perf event synthesis by
        synthesizing events on CPU 0:
          Number of synthesis threads: 12
            Average synthesis took: 43490.700 usec (+- 284.527 usec)
            Average num. events: 37028.500 (+- 0.563)
            Average time per event 1.175 usec
        #
      Signed-off-by: Ian Rogers <irogers@google.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Acked-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andrey Zhizhikin <andrey.z@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lore.kernel.org/lkml/20200415054050.31645-4-irogers@google.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • tools api: Add a lightweight buffered reading api · e95770af
      Ian Rogers authored
      The synthesize benchmark shows the majority of execution time going to
      fgets and sscanf, necessary to parse /proc/pid/maps. Add a new buffered
      reading library that will be used to replace these calls in a follow-up
      CL. Add tests for the library to perf test.
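
      A minimal sketch of the idea (simplified names; the real API lives in
      tools/lib/api/io.h): keep a small user-space buffer and serve one
      character at a time from it, refilling with read(2) only when it runs dry:

        #include <unistd.h>

        struct io {
                int fd;                 /* file to read from */
                char buf[128];          /* buffered data */
                char *cur, *end;        /* read position and fill level */
        };

        static void io_init(struct io *io, int fd)
        {
                io->fd = fd;
                io->cur = io->end = io->buf;
        }

        static int io_get_char(struct io *io)
        {
                if (io->cur == io->end) {
                        ssize_t n = read(io->fd, io->buf, sizeof(io->buf));

                        if (n <= 0)
                                return -1;
                        io->cur = io->buf;
                        io->end = io->buf + n;
                }
                return (unsigned char)*io->cur++;
        }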
      
      Committer tests:
      
        $ perf test api
        63: Test api io                                           : Ok
        $
      Signed-off-by: Ian Rogers <irogers@google.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Acked-by: Namhyung Kim <namhyung@kernel.org>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andrey Zhizhikin <andrey.z@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lore.kernel.org/lkml/20200415054050.31645-3-irogers@google.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf bench: Add a multi-threaded synthesize benchmark · 13edc237
      Ian Rogers authored
      By default this isn't run as it reads /proc and may not have access.
      For consistency, modify the single threaded benchmark to compute an
      average time per event.
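
      A rough sketch of the reported statistic (assumed shape, not the actual
      bench code): time each iteration, then divide the total by the number of
      events synthesized to get the average time per event:

        #include <stdio.h>
        #include <time.h>

        static unsigned long long nsecs(void)
        {
                struct timespec ts;

                clock_gettime(CLOCK_MONOTONIC, &ts);
                return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
        }

        int main(void)
        {
                const int iterations = 100;
                unsigned long long total_ns = 0, total_events = 0;

                for (int i = 0; i < iterations; i++) {
                        unsigned long long start = nsecs();

                        /* a real run would do one synthesis pass over /proc
                         * here and count the events it produced */
                        total_events += 17;
                        total_ns += nsecs() - start;
                }
                printf("  Average synthesis took: %.3f usec\n",
                       (double)total_ns / 1000.0 / iterations);
                printf("  Average time per event %.3f usec\n",
                       (double)total_ns / 1000.0 / total_events);
                return 0;
        }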
      
      Committer testing:
      
        $ grep -m1 "model name" /proc/cpuinfo
        model name	: Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
        $ grep "model name" /proc/cpuinfo  | wc -l
        8
        $
        $ perf bench internals synthesize -h
        # Running 'internals/synthesize' benchmark:
      
         Usage: perf bench internals synthesize <options>
      
            -I, --multi-iterations <n>
                                  Number of iterations used to compute multi-threaded average
            -i, --single-iterations <n>
                                  Number of iterations used to compute single-threaded average
            -M, --max-threads <n>
                                  Maximum number of threads in multithreaded bench
            -m, --min-threads <n>
                                  Minimum number of threads in multithreaded bench
            -s, --st              Run single threaded benchmark
            -t, --mt              Run multi-threaded benchmark
      
        $
        $ perf bench internals synthesize -t
        # Running 'internals/synthesize' benchmark:
        Computing performance of multi threaded perf event synthesis by
        synthesizing events on CPU 0:
          Number of synthesis threads: 1
            Average synthesis took: 65449.000 usec (+- 586.442 usec)
            Average num. events: 9405.400 (+- 0.306)
            Average time per event 6.959 usec
          Number of synthesis threads: 2
            Average synthesis took: 37838.300 usec (+- 130.259 usec)
            Average num. events: 9501.800 (+- 20.469)
            Average time per event 3.982 usec
          Number of synthesis threads: 3
            Average synthesis took: 48551.400 usec (+- 225.686 usec)
            Average num. events: 9544.000 (+- 0.000)
            Average time per event 5.087 usec
          Number of synthesis threads: 4
            Average synthesis took: 29632.500 usec (+- 50.808 usec)
            Average num. events: 9544.000 (+- 0.000)
            Average time per event 3.105 usec
          Number of synthesis threads: 5
            Average synthesis took: 33920.400 usec (+- 284.509 usec)
            Average num. events: 9544.000 (+- 0.000)
            Average time per event 3.554 usec
          Number of synthesis threads: 6
            Average synthesis took: 27604.100 usec (+- 72.344 usec)
            Average num. events: 9548.000 (+- 0.000)
            Average time per event 2.891 usec
          Number of synthesis threads: 7
            Average synthesis took: 25406.300 usec (+- 933.371 usec)
            Average num. events: 9545.500 (+- 0.167)
            Average time per event 2.662 usec
          Number of synthesis threads: 8
            Average synthesis took: 24110.400 usec (+- 73.229 usec)
            Average num. events: 9551.000 (+- 0.000)
            Average time per event 2.524 usec
        $
      Signed-off-by: Ian Rogers <irogers@google.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Acked-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andrey Zhizhikin <andrey.z@gmail.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lore.kernel.org/lkml/20200415054050.31645-2-irogers@google.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  2. 23 Apr, 2020 3 commits
    • perf record: Add num-synthesize-threads option · d99c22ea
      Stephane Eranian authored
      This option controls the degree of parallelism of the synthesize_mmap()
      code, which scans /proc/PID/task/PID/maps and can be time consuming.
      It mimics the perf top way of handling the option. If not specified, it
      defaults to 1 thread, i.e. the default behavior before this option.
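
      A sketch of the option wiring (option and field names here are
      assumptions mirroring the existing perf top option, not necessarily the
      exact hunk):

        /* option table entry, perf top style; defaults to 1 thread elsewhere */
        OPT_UINTEGER(0, "num-thread-synthesize",
                     &record.opts.nr_threads_synthesize,
                     "number of threads to run for event synthesis"),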
      
      On a desktop computer the processing of /proc/PID/task/PID/maps isn't
      slow enough to warrant parallel processing and the thread creation has
      some cost - hence the default of 1. On a loaded server with
      >100 cores it is possible to see synthesis times in the order of
      seconds and in this case having the option is desirable.
      
      As the processing is a synchronization point, it is legitimate to worry if
      Amdahl's law will apply to this patch. Profiling with this patch in
      place:
      https://lore.kernel.org/lkml/20200415054050.31645-4-irogers@google.com/
      shows:
      ...
            - 32.59% __perf_event__synthesize_threads
               - 32.54% __event__synthesize_thread
                  + 22.13% perf_event__synthesize_mmap_events
                  + 6.68% perf_event__get_comm_ids.constprop.0
                  + 1.49% process_synthesized_event
                  + 1.29% __GI___readdir64
                  + 0.60% __opendir
      ...
      That is, the serialized processing (process_synthesized_event) is 1.49% of
      execution time and there is plenty to make parallel. This is shown in the
      benchmark in this patch:
      
      https://lore.kernel.org/lkml/20200415054050.31645-2-irogers@google.com/
      
        Computing performance of multi threaded perf event synthesis by
        synthesizing events on CPU 0:
         Number of synthesis threads: 1
           Average synthesis took: 127729.000 usec (+- 3372.880 usec)
           Average num. events: 21548.600 (+- 0.306)
           Average time per event 5.927 usec
         Number of synthesis threads: 2
           Average synthesis took: 88863.500 usec (+- 385.168 usec)
           Average num. events: 21552.800 (+- 0.327)
           Average time per event 4.123 usec
         Number of synthesis threads: 3
           Average synthesis took: 83257.400 usec (+- 348.617 usec)
           Average num. events: 21553.200 (+- 0.327)
           Average time per event 3.863 usec
         Number of synthesis threads: 4
           Average synthesis took: 75093.000 usec (+- 422.978 usec)
           Average num. events: 21554.200 (+- 0.200)
           Average time per event 3.484 usec
         Number of synthesis threads: 5
           Average synthesis took: 64896.600 usec (+- 353.348 usec)
           Average num. events: 21558.000 (+- 0.000)
           Average time per event 3.010 usec
         Number of synthesis threads: 6
           Average synthesis took: 59210.200 usec (+- 342.890 usec)
           Average num. events: 21560.000 (+- 0.000)
           Average time per event 2.746 usec
         Number of synthesis threads: 7
           Average synthesis took: 54093.900 usec (+- 306.247 usec)
           Average num. events: 21562.000 (+- 0.000)
           Average time per event 2.509 usec
         Number of synthesis threads: 8
           Average synthesis took: 48938.700 usec (+- 341.732 usec)
           Average num. events: 21564.000 (+- 0.000)
           Average time per event 2.269 usec
      
      Here the average time per synthesized event goes from 5.927 usec with 1
      thread to 2.269 usec with 8. This isn't a linear speed up, as not all of
      the synthesis code has been made parallel. If the synthesis time was about
      10 seconds, then using 8 threads may bring it down to less than 4.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Reviewed-by: Ian Rogers <irogers@google.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tony Jones <tonyj@suse.de>
      Cc: yuzhoujian <yuzhoujian@didichuxing.com>
      Link: http://lore.kernel.org/lkml/20200422155038.9380-1-irogers@google.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf test session topology: Fix data path · dbd660e6
      Tommi Rantala authored
      Commit 2d4f2799 ("perf data: Add global path holder") missed path
      conversion in tests/topology.c, causing the "Session topology" testcase
      to "hang" (waits forever for input from stdin) when doing "ssh $VM perf
      test".
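
      A sketch of the kind of conversion that was missed (field names as
      introduced by the "global path holder" change; not the exact
      tests/topology.c hunk):

        /* before: the path only reaches the embedded perf_data_file, which
         * the session code no longer consults, so it falls back to stdin */
        struct perf_data data_old = {
                .file = { .path = path },
                .mode = PERF_DATA_MODE_READ,
        };

        /* after: the path is set on struct perf_data itself */
        struct perf_data data_new = {
                .path = path,
                .mode = PERF_DATA_MODE_READ,
        };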
      
      The hang can be reproduced by running "cat | perf test topo", and turned
      into a crash by replacing cat with true:
      
        $ true | perf test -v topo
        40: Session topology                                      :
        --- start ---
        test child forked, pid 3638
        templ file: /tmp/perf-test-QPvAch
        incompatible file format
        incompatible file format (rerun with -v to learn more)
        free(): invalid pointer
        test child interrupted
        ---- end ----
        Session topology: FAILED!
      
      Committer testing:
      
      Reproduced the above result before the patch; with the patch applied it
      is back to working:
      
        # true | perf test -v topo
        41: Session topology                                      :
        --- start ---
        test child forked, pid 19374
        templ file: /tmp/perf-test-YOTEQg
        CPU 0, core 0, socket 0
        CPU 1, core 1, socket 0
        CPU 2, core 2, socket 0
        CPU 3, core 3, socket 0
        CPU 4, core 0, socket 0
        CPU 5, core 1, socket 0
        CPU 6, core 2, socket 0
        CPU 7, core 3, socket 0
        test child finished with 0
        ---- end ----
        Session topology: Ok
        #
      
      Fixes: 2d4f2799 ("perf data: Add global path holder")
      Signed-off-by: Tommi Rantala <tommi.t.rantala@nokia.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Mamatha Inamdar <mamatha4@linux.vnet.ibm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
      Link: http://lore.kernel.org/lkml/20200423115341.562782-1-tommi.t.rantala@nokia.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf stat: Improve runtime stat for interval mode · 197ba86f
      Jin Yao authored
      For interval mode, the metric is printed after the '#' character if it
      exists. But it's not calculated from the counts generated in this
      interval.
      
      See the following examples:
      
        root@kbl-ppc:~# perf stat -M CPI -I1000 --interval-count 2
        #           time             counts unit events
             1.000422803            764,809      inst_retired.any          #      2.9 CPI
             1.000422803          2,234,932      cycles
             2.001464585          1,960,061      inst_retired.any          #      1.6 CPI
             2.001464585          4,022,591      cycles
      
      The second CPI should not be 1.6 (4,022,591/1,960,061 is 2.1).
      
        root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2
        #           time             counts unit events
             1.000429493          2,869,311      cycles
             1.000429493            816,875      instructions              #    0.28  insn per cycle
             2.001516426          9,260,973      cycles
             2.001516426          5,250,634      instructions              #    0.87  insn per cycle
      
      The second 'insn per cycle' should not be 0.87 (5,250,634/9,260,973 is
      0.57).
      
      The current code uses a global variable 'rt_stat' for tracking and
      updating the std dev of the runtime stat. Unlike the counts, 'rt_stat' is
      not reset per interval, while the counts are:
      
        perf_stat_process_counter()
        {
                if (config->interval)
                        init_stats(ps->res_stats);
        }
      
      So for interval mode, the 'rt_stat' variable should be reset too.
      
      This patch resets 'rt_stat' before read_counters(), so the runtime stat
      is only calculated from the counts generated in this interval.
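
      A sketch of where the reset lands (simplified from the interval handling
      in builtin-stat.c; perf_stat__reset_shadow_per_stat() is the existing
      helper that reinitializes 'rt_stat'):

        static void process_interval(void)
        {
                struct timespec ts, rs;

                clock_gettime(CLOCK_MONOTONIC, &ts);
                diff_timespec(&rs, &ts, &ref_time);

                /* start this interval's metrics from a clean slate */
                perf_stat__reset_shadow_per_stat(&rt_stat);
                read_counters(&rs);
                /* printing of the interval's counts and metrics follows */
        }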
      
      With this patch:
      
        root@kbl-ppc:~# perf stat -M CPI -I1000 --interval-count 2
        #           time             counts unit events
             1.000420924          2,408,818      inst_retired.any          #      2.1 CPI
             1.000420924          5,010,111      cycles
             2.001448579          2,798,407      inst_retired.any          #      1.6 CPI
             2.001448579          4,599,861      cycles
      
        root@kbl-ppc:~# perf stat -e cycles,instructions -I1000 --interval-count 2
        #           time             counts unit events
             1.000428555          2,769,714      cycles
             1.000428555            774,462      instructions              #    0.28  insn per cycle
             2.001471562          3,595,904      cycles
             2.001471562          1,243,703      instructions              #    0.35  insn per cycle
      
      Now the second 'insn per cycle' and CPI are calculated from the counts
      generated in this interval.
      Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Tested-By: Kajol Jain <kjain@linux.ibm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jin Yao <yao.jin@intel.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lore.kernel.org/lkml/20200420145417.6864-1-yao.jin@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  3. 22 Apr, 2020 6 commits
  4. 21 Apr, 2020 22 commits
  5. 20 Apr, 2020 4 commits