1. 12 Oct, 2022 1 commit
  2. 06 Oct, 2022 4 commits
    • ftrace: Create separate entry in MAINTAINERS for function hooks · 4f881a69
      Steven Rostedt (Google) authored
      The function hooks facility (ftrace) is a completely different subsystem
      from general tracing. It manages how to attach callbacks to most functions
      in the kernel, and it is also used by live kernel patching. It really is
      not part of tracing, although tracing uses it.
      
      Create a separate entry for FUNCTION HOOKS (FTRACE) to be separate from
      tracing itself in the MAINTAINERS file.
      
      Perhaps it should be moved out of the kernel/trace directory, but that's
      for another time.
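      
      For illustration, an entry of this kind has roughly the shape below; the
      maintainer list and file patterns here are assumptions for the sketch,
      not necessarily the exact entry this commit adds:
      
      	FUNCTION HOOKS (FTRACE)
      	M:	Steven Rostedt <rostedt@goodmis.org>
      	S:	Maintained
      	F:	include/*/ftrace.h
      	F:	kernel/trace/ftrace*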
      
      Link: https://lkml.kernel.org/r/20221006144439.459272364@goodmis.org
      
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing: Update MAINTAINERS to reflect new tracing git repo · fb17b268
      Steven Rostedt (Google) authored
      The tracing git repo will no longer be housed in my personal git repo,
      but instead live in trace/linux-trace.git.
      
      Update the MAINTAINERS file appropriately.
      
      Link: https://lkml.kernel.org/r/20221006144439.282193367@goodmis.org
      
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing: Do not free snapshot if tracer is on cmdline · a541a955
      Steven Rostedt (Google) authored
      The ftrace_boot_snapshot and alloc_snapshot cmdline options allocate the
      snapshot buffer at boot up for later use. ftrace_boot_snapshot in
      particular requires the snapshot to be allocated because it takes a
      snapshot at the end of boot up, allowing the traces that happened during
      boot to be seen and not lost when user space takes over.
      
      When a tracer is registered (started), a path checks whether it requires
      the snapshot buffer, and if it does not but the buffer was allocated, the
      path synchronizes and then frees the snapshot buffer.
      
      That synchronization is only required if the previous tracer was using the
      buffer for "max latency" snapshots (like the irqsoff tracer and friends),
      as all max snapshots must complete before the buffer is freed. But it does
      not make sense to free the buffer at all if the previous tracer was not
      using it and the snapshot was allocated by the cmdline parameters. That
      takes away the point of allocating it in the first place!
      
      Note, the allocated snapshot worked fine for just trace events, but fails
      when a tracer is enabled on the cmdline.
      
      Further investigation shows this goes back even further, and it does not
      require a tracer on the cmdline to fail. Simply enable snapshots and then
      enable a tracer, and the snapshot will be removed.
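      
      A sketch of the kind of guard this implies in tracing_set_tracer() in
      kernel/trace/trace.c; the exact shape of the upstream fix may differ, so
      treat this as an illustration of the idea rather than the final patch:
      
      	/*
      	 * Base the decision on whether the previous tracer was
      	 * actually using the max latency snapshot, not on whether
      	 * the snapshot buffer merely happens to be allocated (it
      	 * may have been allocated by the cmdline options).
      	 */
      	had_max_tr = tr->current_trace->use_max_tr;
      
      	if (had_max_tr && !t->use_max_tr) {
      		/* Wait for in-flight max snapshots, then free. */
      		synchronize_rcu();
      		free_snapshot(tr);
      	}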
      
      Link: https://lkml.kernel.org/r/20221005113757.041df7fe@gandalf.local.home
      
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: stable@vger.kernel.org
      Fixes: 45ad21ca ("tracing: Have trace_array keep track if snapshot buffer is allocated")
      Reported-by: Ross Zwisler <zwisler@kernel.org>
      Tested-by: Ross Zwisler <zwisler@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • ftrace: Still disable enabled records marked as disabled · cf04f2d5
      Steven Rostedt (Google) authored
      Weak functions started causing havoc as they showed up in the
      "available_filter_functions" file, and this confused people as to why some
      functions marked as "notrace" were listed but did nothing when enabled.
      This was because weak functions can still have fentry calls, and these
      addresses get added to the "available_filter_functions" file. kallsyms is
      what converts those addresses to names, and since the weak functions are
      not listed in kallsyms, it would just pick the function listed before that
      address.
      
      To solve this, there was a trick to detect weak functions listed, and
      these records would be marked as DISABLED so that they do not get enabled
      and are mostly ignored. As the processing of the list of all functions to
      figure out what is weak or not can take a long time, this process is put
      off into a kernel thread and run in parallel with the rest of start up.
      
      Now the issue happens when function tracing is enabled via the kernel
      command line. As it starts very early in boot up, it can be enabled before
      the records that are weak are marked to be disabled. This causes an issue
      in the accounting: the weak records are enabled by the command line
      function tracing, but after boot up they are not disabled.
      
      The ftrace records have several accounting flags and a ref count. The
      DISABLED flag is just one. If a record is enabled before it is marked
      DISABLED, it will get an ENABLED flag and also have its ref counter
      incremented. After it is marked DISABLED, neither the ENABLED flag nor the
      ref counter is cleared. There are sanity checks on the records that are
      performed after an ftrace function is registered or unregistered, and
      these detected records marked as ENABLED with a ref counter that should
      not have been set.
      
      Note, the module loading code uses the DISABLED flag as well to keep its
      functions from being modified while the module is being loaded, and some
      of these flags may get set in this process. So changing the verification
      code to ignore DISABLED records is a no go, as it still needs to verify
      that the module records are working too.
      
      Also, the weak functions still are calling a trampoline. Even though they
      should never be called, it is dangerous to leave these weak functions
      calling a trampoline that is freed, so they should still be set back to
      nops.
      
      There are two places that must not skip records that have both the ENABLED
      and the DISABLED flags set: where the ftrace_ops is processed and the
      records' ref counts are set, and later when the function itself is to be
      updated and the ENABLED flag gets removed. Add a helper function
      "skip_record()" that returns true if the record has the DISABLED flag set
      but not the ENABLED flag.
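      
      A minimal sketch of such a helper, assuming the FTRACE_FL_DISABLED and
      FTRACE_FL_ENABLED flag names used by the ftrace dyn_ftrace records:
      
      	static bool skip_record(struct dyn_ftrace *rec)
      	{
      		/*
      		 * Skip records that were marked DISABLED (e.g. weak
      		 * functions) unless they were already ENABLED, in
      		 * which case they still need their accounting (and
      		 * eventually their nop) updated.
      		 */
      		return rec->flags & FTRACE_FL_DISABLED &&
      			!(rec->flags & FTRACE_FL_ENABLED);
      	}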
      
      Link: https://lkml.kernel.org/r/20221005003809.27d2b97b@gandalf.local.home
      
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: stable@vger.kernel.org
      Fixes: b39181f7 ("ftrace: Add FTRACE_MCOUNT_MAX_OFFSET to avoid adding weak function")
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
  3. 03 Oct, 2022 4 commits
  4. 29 Sep, 2022 12 commits
  5. 27 Sep, 2022 9 commits
  6. 26 Sep, 2022 10 commits
    • rv/monitor: Add __init/__exit annotations to module init/exit funcs · 834168fb
      Xiu Jianfeng authored
      Add missing __init/__exit annotations to module init/exit funcs.
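      
      For illustration, the annotation pattern looks as follows; the function
      and monitor names here follow the wip monitor skeleton, as an assumption
      for the sketch:
      
      	static int __init register_wip(void)
      	{
      		rv_register_monitor(&rv_wip);
      		return 0;
      	}
      
      	static void __exit unregister_wip(void)
      	{
      		rv_unregister_monitor(&rv_wip);
      	}
      
      	module_init(register_wip);
      	module_exit(unregister_wip);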
      
      Link: https://lkml.kernel.org/r/20220922103208.162869-1-xiujianfeng@huawei.com
      
      Fixes: 24bce201 ("tools/rv: Add dot2k")
      Fixes: 8812d212 ("rv/monitor: Add the wip monitor skeleton created by dot2k")
      Fixes: ccc319dc ("rv/monitor: Add the wwnr monitor")
      Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Acked-by: Daniel Bristot de Oliveira <bristot@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing/osnoise: Fix possible recursive locking in stop_per_cpu_kthreads · 99ee9317
      Nico Pache authored
      There is a recursive lock on the cpu_hotplug_lock.
      
      In kernel/trace/trace_osnoise.c:<start/stop>_per_cpu_kthreads:
          - start_per_cpu_kthreads calls cpus_read_lock(), and if
            start_kthreads returns an error it calls stop_per_cpu_kthreads.
          - stop_per_cpu_kthreads then calls cpus_read_lock() again, causing
            a deadlock.
      
      Fix this by calling cpus_read_unlock() before calling
      stop_per_cpu_kthreads, as sketched below. The same behavior can be seen
      in commit f46b1652 ("trace/hwlat: Implement the per-cpu mode").
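      
      A sketch of the fixed error path, assuming the surrounding structure of
      start_per_cpu_kthreads() described above rather than quoting the exact
      upstream diff:
      
      	cpus_read_lock();
      	for_each_online_cpu(cpu) {
      		retval = start_kthread(cpu);
      		if (retval) {
      			/*
      			 * Drop the lock first: stop_per_cpu_kthreads()
      			 * takes cpus_read_lock() itself.
      			 */
      			cpus_read_unlock();
      			stop_per_cpu_kthreads();
      			return retval;
      		}
      	}
      	cpus_read_unlock();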
      
      This error was noticed during the LTP ftrace-stress-test:
      
      WARNING: possible recursive locking detected
      --------------------------------------------
      sh/275006 is trying to acquire lock:
      ffffffffb02f5400 (cpu_hotplug_lock){++++}-{0:0}, at: stop_per_cpu_kthreads
      
      but task is already holding lock:
      ffffffffb02f5400 (cpu_hotplug_lock){++++}-{0:0}, at: start_per_cpu_kthreads
      
      other info that might help us debug this:
       Possible unsafe locking scenario:
      
            CPU0
            ----
       lock(cpu_hotplug_lock);
       lock(cpu_hotplug_lock);
      
       *** DEADLOCK ***
      
      May be due to missing lock nesting notation
      
      3 locks held by sh/275006:
       #0: ffff8881023f0470 (sb_writers#24){.+.+}-{0:0}, at: ksys_write
       #1: ffffffffb084f430 (trace_types_lock){+.+.}-{3:3}, at: rb_simple_write
       #2: ffffffffb02f5400 (cpu_hotplug_lock){++++}-{0:0}, at: start_per_cpu_kthreads
      
      Link: https://lkml.kernel.org/r/20220919144932.3064014-1-npache@redhat.com
      
      Fixes: c8895e27 ("trace/osnoise: Support hotplug operations")
      Signed-off-by: Nico Pache <npache@redhat.com>
      Acked-by: Daniel Bristot de Oliveira <bristot@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing: kprobe: Make gen test module work in arm and riscv · d8ef45d6
      Yipeng Zou authored
      For now, this selftest module can only work on x86 because the kprobe cmd
      it generates is hard-coded to use x86 register names. Adapt the register
      names for arm and riscv so that the module works on those platforms too.
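      
      For illustration, per-architecture selection of the register names used
      in the generated command could look like the sketch below; the macro name
      and the exact registers chosen are assumptions, not the upstream patch:
      
      	#if defined(CONFIG_X86_64)
      	# define GEN_TEST_ARG0	"%ax"
      	#elif defined(CONFIG_ARM64)
      	# define GEN_TEST_ARG0	"%x0"
      	#elif defined(CONFIG_RISCV)
      	# define GEN_TEST_ARG0	"%a0"
      	#endif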
      
      Link: https://lkml.kernel.org/r/20220919125629.238242-3-zouyipeng@huawei.com
      
      Cc: <linux-riscv@lists.infradead.org>
      Cc: <mingo@redhat.com>
      Cc: <paul.walmsley@sifive.com>
      Cc: <palmer@dabbelt.com>
      Cc: <aou@eecs.berkeley.edu>
      Cc: <zanussi@kernel.org>
      Cc: <liaochang1@huawei.com>
      Cc: <chris.zjh@huawei.com>
      Fixes: 64836248 ("tracing: Add kprobe event command generation test module")
      Signed-off-by: Yipeng Zou <zouyipeng@huawei.com>
      Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing: kprobe: Fix kprobe event gen test module on exit · ac48e189
      Yipeng Zou authored
      Correct the event parameter used to clear the gen_kretprobe_test event on
      module exit. Without this fix, the generated event cannot be deleted.
      
      Link: https://lkml.kernel.org/r/20220919125629.238242-2-zouyipeng@huawei.com
      
      Cc: <linux-riscv@lists.infradead.org>
      Cc: <mingo@redhat.com>
      Cc: <paul.walmsley@sifive.com>
      Cc: <palmer@dabbelt.com>
      Cc: <aou@eecs.berkeley.edu>
      Cc: <zanussi@kernel.org>
      Cc: <liaochang1@huawei.com>
      Cc: <chris.zjh@huawei.com>
      Fixes: 64836248 ("tracing: Add kprobe event command generation test module")
      Signed-off-by: Yipeng Zou <zouyipeng@huawei.com>
      Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • x86/kprobes: Remove unused arch_kprobe_override_function() declaration · d4940b84
      Gaosheng Cui authored
      All uses of arch_kprobe_override_function() have been removed by
      commit 540adea3 ("error-injection: Separate error-injection
      from kprobe"), so remove the declaration, too.
      
      Link: https://lkml.kernel.org/r/20220914110437.1436353-3-cuigaosheng1@huawei.com
      
      Cc: <mingo@redhat.com>
      Cc: <tglx@linutronix.de>
      Cc: <bp@alien8.de>
      Cc: <dave.hansen@linux.intel.com>
      Cc: <x86@kernel.org>
      Cc: <hpa@zytor.com>
      Cc: <mhiramat@kernel.org>
      Cc: <peterz@infradead.org>
      Cc: <ast@kernel.org>
      Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • x86/ftrace: Remove unused modifying_ftrace_code declaration · 40d81137
      Gaosheng Cui authored
      All uses of modifying_ftrace_code have been removed by
      commit 768ae440 ("x86/ftrace: Use text_poke()"),
      so remove the declaration, too.
      
      Link: https://lkml.kernel.org/r/20220914110437.1436353-2-cuigaosheng1@huawei.com
      
      Cc: <mingo@redhat.com>
      Cc: <tglx@linutronix.de>
      Cc: <bp@alien8.de>
      Cc: <dave.hansen@linux.intel.com>
      Cc: <x86@kernel.org>
      Cc: <hpa@zytor.com>
      Cc: <mhiramat@kernel.org>
      Cc: <peterz@infradead.org>
      Cc: <ast@kernel.org>
      Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracepoint: Optimize the critical region of mutex_lock in tracepoint_module_coming() · 51714678
      Zhen Lei authored
      The memory allocation of 'tp_mod' does not require mutex_lock()
      protection, so move it outside the critical region, as sketched below.
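      
      A sketch of the resulting shape of tracepoint_module_coming(), assuming
      the tp_module structure and tracepoint_module_list_mutex from
      kernel/tracepoint.c; error handling is abridged:
      
      	struct tp_module *tp_mod;
      
      	/* Allocate before taking the mutex: kmalloc() needs no
      	 * protection from the tracepoint module list lock. */
      	tp_mod = kmalloc(sizeof(struct tp_module), GFP_KERNEL);
      	if (!tp_mod)
      		return -ENOMEM;
      	tp_mod->mod = mod;
      
      	mutex_lock(&tracepoint_module_list_mutex);
      	list_add_tail(&tp_mod->list, &tracepoint_module_list);
      	/* ... notify MODULE_STATE_COMING ... */
      	mutex_unlock(&tracepoint_module_list_mutex);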
      
      Link: https://lkml.kernel.org/r/20220914061416.1630-1-thunder.leizhen@huawei.com
      
      Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing/filter: Call filter predicate functions directly via a switch statement · fde59ab1
      Steven Rostedt (Google) authored
      Due to retpolines, indirect calls are much more expensive than direct
      calls. The filters use a select set of functions for the predicates.
      Instead of using function pointers to call them, create a
      filter_pred_fn_call() function that uses a switch statement to call the
      predicate functions directly. This gives almost a 10% speedup to the
      filter logic.
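      
      An abridged sketch of the dispatch pattern; the enum values and predicate
      function names below are illustrative stand-ins for the full set in
      trace_events_filter.c:
      
      	static int filter_pred_fn_call(struct filter_pred *pred, void *event)
      	{
      		/* Direct calls via switch avoid a retpoline per predicate. */
      		switch (pred->fn_num) {
      		case FILTER_PRED_FN_64:
      			return filter_pred_64(pred, event);
      		case FILTER_PRED_FN_32:
      			return filter_pred_32(pred, event);
      		case FILTER_PRED_FN_STRING:
      			return filter_pred_string(pred, event);
      		default:
      			return 0;
      		}
      	}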
      
      Using the histogram benchmark:
      
      Before:
      
       # event histogram
       #
       # trigger info: hist:keys=delta:vals=hitcount:sort=delta:size=2048 if delta > 0 [active]
       #
      
      { delta:        113 } hitcount:        272
      { delta:        114 } hitcount:        840
      { delta:        118 } hitcount:        344
      { delta:        119 } hitcount:      25428
      { delta:        120 } hitcount:     350590
      { delta:        121 } hitcount:    1892484
      { delta:        122 } hitcount:    6205004
      { delta:        123 } hitcount:   11583521
      { delta:        124 } hitcount:   37590979
      { delta:        125 } hitcount:  108308504
      { delta:        126 } hitcount:  131672461
      { delta:        127 } hitcount:   88700598
      { delta:        128 } hitcount:   65939870
      { delta:        129 } hitcount:   45055004
      { delta:        130 } hitcount:   33174464
      { delta:        131 } hitcount:   31813493
      { delta:        132 } hitcount:   29011676
      { delta:        133 } hitcount:   22798782
      { delta:        134 } hitcount:   22072486
      { delta:        135 } hitcount:   17034113
      { delta:        136 } hitcount:    8982490
      { delta:        137 } hitcount:    2865908
      { delta:        138 } hitcount:     980382
      { delta:        139 } hitcount:    1651944
      { delta:        140 } hitcount:    4112073
      { delta:        141 } hitcount:    3963269
      { delta:        142 } hitcount:    1712508
      { delta:        143 } hitcount:     575941
      
      After:
      
       # event histogram
       #
       # trigger info: hist:keys=delta:vals=hitcount:sort=delta:size=2048 if delta > 0 [active]
       #
      
      { delta:        103 } hitcount:         60
      { delta:        104 } hitcount:      16966
      { delta:        105 } hitcount:     396625
      { delta:        106 } hitcount:    3223400
      { delta:        107 } hitcount:   12053754
      { delta:        108 } hitcount:   20241711
      { delta:        109 } hitcount:   14850200
      { delta:        110 } hitcount:    4946599
      { delta:        111 } hitcount:    3479315
      { delta:        112 } hitcount:   18698299
      { delta:        113 } hitcount:   62388733
      { delta:        114 } hitcount:   95803834
      { delta:        115 } hitcount:   58278130
      { delta:        116 } hitcount:   15364800
      { delta:        117 } hitcount:    5586866
      { delta:        118 } hitcount:    2346880
      { delta:        119 } hitcount:    1131091
      { delta:        120 } hitcount:     620896
      { delta:        121 } hitcount:     236652
      { delta:        122 } hitcount:     105957
      { delta:        123 } hitcount:     119107
      { delta:        124 } hitcount:      54494
      { delta:        125 } hitcount:      63856
      { delta:        126 } hitcount:      64454
      { delta:        127 } hitcount:      34818
      { delta:        128 } hitcount:      41446
      { delta:        129 } hitcount:      51242
      { delta:        130 } hitcount:      28361
      { delta:        131 } hitcount:      23926
      
      The peak before was 126ns per event; after, the peak is 114ns, and the
      fastest time went from 113ns to 103ns.
      
      Link: https://lkml.kernel.org/r/20220906225529.781407172@goodmis.org
      
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Tom Zanussi <zanussi@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing: Move struct filter_pred into trace_events_filter.c · 26c4e3d1
      Steven Rostedt (Google) authored
      The structure filter_pred and the typedef of the function it uses are only
      referenced by trace_events_filter.c. There is no reason to have them in an
      external header file; move them into the only file they are used in.
      
      Link: https://lkml.kernel.org/r/20220906225529.598047132@goodmis.org
      
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Tom Zanussi <zanussi@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing/hist: Call hist functions directly via a switch statement · 86087383
      Steven Rostedt (Google) authored
      Due to retpolines, indirect calls are much more expensive than direct
      calls. The histograms use a select set of functions. Instead of using
      function pointers to call them, create a hist_fn_call() function that uses
      a switch statement to call the histogram functions directly, just as the
      filter logic does with filter_pred_fn_call(). This gives a 13% speedup to
      the histogram logic.
      
      Using the histogram benchmark:
      
      Before:
      
       # event histogram
       #
       # trigger info: hist:keys=delta:vals=hitcount:sort=delta:size=2048 if delta > 0 [active]
       #
      
      { delta:        129 } hitcount:       2213
      { delta:        130 } hitcount:     285965
      { delta:        131 } hitcount:    1146545
      { delta:        132 } hitcount:    51854322
      { delta:        133 } hitcount:   19896215
      { delta:        134 } hitcount:   53118616
      { delta:        135 } hitcount:   83816709
      { delta:        136 } hitcount:   68329562
      { delta:        137 } hitcount:   41859349
      { delta:        138 } hitcount:   46257797
      { delta:        139 } hitcount:   54400831
      { delta:        140 } hitcount:   72875007
      { delta:        141 } hitcount:   76193272
      { delta:        142 } hitcount:   49504263
      { delta:        143 } hitcount:   38821072
      { delta:        144 } hitcount:   47702679
      { delta:        145 } hitcount:   41357297
      { delta:        146 } hitcount:   22058238
      { delta:        147 } hitcount:    9720002
      { delta:        148 } hitcount:    3193542
      { delta:        149 } hitcount:     927030
      { delta:        150 } hitcount:     850772
      { delta:        151 } hitcount:    1477380
      { delta:        152 } hitcount:    2687977
      { delta:        153 } hitcount:    2865985
      { delta:        154 } hitcount:    1977492
      { delta:        155 } hitcount:    2475607
      { delta:        156 } hitcount:    3403612
      
      After:
      
       # event histogram
       #
       # trigger info: hist:keys=delta:vals=hitcount:sort=delta:size=2048 if delta > 0 [active]
       #
      
      { delta:        113 } hitcount:        272
      { delta:        114 } hitcount:        840
      { delta:        118 } hitcount:        344
      { delta:        119 } hitcount:      25428
      { delta:        120 } hitcount:     350590
      { delta:        121 } hitcount:    1892484
      { delta:        122 } hitcount:    6205004
      { delta:        123 } hitcount:   11583521
      { delta:        124 } hitcount:   37590979
      { delta:        125 } hitcount:  108308504
      { delta:        126 } hitcount:  131672461
      { delta:        127 } hitcount:   88700598
      { delta:        128 } hitcount:   65939870
      { delta:        129 } hitcount:   45055004
      { delta:        130 } hitcount:   33174464
      { delta:        131 } hitcount:   31813493
      { delta:        132 } hitcount:   29011676
      { delta:        133 } hitcount:   22798782
      { delta:        134 } hitcount:   22072486
      { delta:        135 } hitcount:   17034113
      { delta:        136 } hitcount:    8982490
      { delta:        137 } hitcount:    2865908
      { delta:        138 } hitcount:     980382
      { delta:        139 } hitcount:    1651944
      { delta:        140 } hitcount:    4112073
      { delta:        141 } hitcount:    3963269
      { delta:        142 } hitcount:    1712508
      { delta:        143 } hitcount:     575941
      { delta:        144 } hitcount:     351427
      { delta:        145 } hitcount:     218077
      { delta:        146 } hitcount:     167297
      { delta:        147 } hitcount:     146198
      { delta:        148 } hitcount:     116122
      { delta:        149 } hitcount:      58993
      { delta:        150 } hitcount:      40228
      
      The delta above is in nanoseconds. It brings the fastest time down from
      129ns to 113ns, and the peak from 141ns to 126ns.
      
      Link: https://lkml.kernel.org/r/20220906225529.411545333@goodmis.org
      
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Tom Zanussi <zanussi@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>