  06 Oct, 2022 (4 commits)
    • ftrace: Create separate entry in MAINTAINERS for function hooks · 4f881a69
      Steven Rostedt (Google) authored
      The function hooks subsystem (ftrace) is completely different from
      general tracing. It manages how callbacks are attached to most
      functions in the kernel, and it is also used by live kernel patching.
      It really is not part of tracing, although tracing uses it.
      
      Create a separate FUNCTION HOOKS (FTRACE) entry in the MAINTAINERS
      file, distinct from tracing itself.
      
      Perhaps it should be moved out of the kernel/trace directory, but that's
      for another time.
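      
      For reference, an entry of this shape in MAINTAINERS typically looks
      like the following (the fields shown are illustrative, pieced together
      from this commit's context, not the exact entry that was added):
      
        FUNCTION HOOKS (FTRACE)
        M:      Steven Rostedt <rostedt@goodmis.org>
        M:      Masami Hiramatsu <mhiramat@kernel.org>
        S:      Maintained
        F:      include/linux/ftrace.h
        F:      kernel/trace/ftrace*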
      
      Link: https://lkml.kernel.org/r/20221006144439.459272364@goodmis.org
      
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing: Update MAINTAINERS to reflect new tracing git repo · fb17b268
      Steven Rostedt (Google) authored
      The tracing git repo will no longer be housed in my personal tree,
      but will instead live in trace/linux-trace.git.
      
      Update the MAINTAINERS file appropriately.
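      
      For illustration, the repository reference (the T: line) would then
      point at something like the following (the full kernel.org URL is
      inferred from the trace/linux-trace.git path named above):
      
        T:      git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git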
      
      Link: https://lkml.kernel.org/r/20221006144439.282193367@goodmis.org
      
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • tracing: Do not free snapshot if tracer is on cmdline · a541a955
      Steven Rostedt (Google) authored
      The ftrace_boot_snapshot and alloc_snapshot cmdline options allocate
      the snapshot buffer at boot up for later use. ftrace_boot_snapshot in
      particular requires the buffer to be allocated because it takes a
      snapshot at the end of boot, preserving the traces that happened
      during boot so they are not lost when user space takes over.
      
      When a tracer is registered (started), there is a path that checks
      whether it requires the snapshot buffer, and if it does not but the
      buffer was allocated, it does a synchronization and frees the
      snapshot buffer.
      
      The synchronization is only required if the previous tracer was using
      the buffer for "max latency" snapshots (like the irqsoff tracer and
      friends), as it needs to make sure all max snapshots are complete
      before freeing. But it does not make sense to free the buffer if the
      previous tracer was not using it and the snapshot was allocated by
      the cmdline parameters. That takes away the point of allocating it
      in the first place!
      
      Note, the allocated snapshot works fine for just trace events, but
      fails when a tracer is enabled on the cmdline.
      
      Further investigation showed that this goes back even further, and
      that a tracer on the cmdline is not required to trigger it. Simply
      enable snapshots and then enable a tracer, and the snapshot buffer
      will be freed.
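      
      A minimal sketch of the intended check in the tracer switch path,
      where t is the incoming tracer, tr the trace instance, and had_max_tr
      records whether the previous tracer used the max latency snapshot
      (illustrative only, not the actual patch; the snapshot_from_cmdline
      flag is a hypothetical stand-in for however the kernel tracks that
      the buffer came from the cmdline):
      
        if (had_max_tr && !t->use_max_tr) {
                /* Make sure all pending max snapshot swaps are done. */
                synchronize_rcu();
                /* Keep the buffer if the cmdline asked for it. */
                if (!tr->snapshot_from_cmdline)
                        free_snapshot(tr);
        }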
      
      Link: https://lkml.kernel.org/r/20221005113757.041df7fe@gandalf.local.home
      
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: stable@vger.kernel.org
      Fixes: 45ad21ca ("tracing: Have trace_array keep track if snapshot buffer is allocated")
      Reported-by: Ross Zwisler <zwisler@kernel.org>
      Tested-by: Ross Zwisler <zwisler@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • ftrace: Still disable enabled records marked as disabled · cf04f2d5
      Steven Rostedt (Google) authored
      Weak functions started causing havoc by showing up in
      "available_filter_functions", which confused people as to why some
      functions marked as "notrace" were listed but did nothing when
      enabled. This happened because weak functions can still have fentry
      calls, and those addresses get added to the
      "available_filter_functions" file. kallsyms is what converts those
      addresses to names, and since weak functions are not listed in
      kallsyms, it would just pick the function before them.
      
      To solve this, a trick was added to detect the listed weak functions
      and mark their records as DISABLED so that they do not get enabled
      and are mostly ignored. As processing the list of all functions to
      figure out which are weak can take a long time, this work is deferred
      to a kernel thread and run in parallel with the rest of start up.
      
      Now the issue happens when function tracing is enabled via the kernel
      command line. As it starts very early in boot up, it can be enabled
      before the weak records are marked to be disabled. This causes an
      issue in the accounting: the weak records are enabled by the command
      line function tracing, but after boot up they are never disabled.
      
      The ftrace records have several accounting flags and a ref count. The
      DISABLED flag is just one of them. If a record is enabled before it
      is marked DISABLED, it will get the ENABLED flag and also have its
      ref counter incremented. After it is marked DISABLED, neither the
      ENABLED flag nor the ref counter is cleared. There are sanity checks
      performed on the records after an ftrace function is registered or
      unregistered, and these detected records marked ENABLED, with ref
      counters, that should not have been.
      
      Note, the module loading code uses the DISABLED flag as well, to keep
      a module's functions from being modified while it is being loaded,
      and some of these flags may get set in that process. So changing the
      verification code to ignore DISABLED records is a no go, as it still
      needs to verify that the module records are working too.
      
      Also, the weak functions are still calling a trampoline. Even though
      they should never be called, it is dangerous to leave these weak
      functions calling a trampoline that has been freed, so they should
      still be set back to nops.
      
      There are two places that must not skip records that have both the
      ENABLED and the DISABLED flags set: where the ftrace_ops is processed
      and the records' ref counts are set, and later when the function
      itself is updated and the ENABLED flag is removed. Add a helper
      function skip_record() that returns true if the record has the
      DISABLED flag set but not the ENABLED flag.
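      
      The helper described above reads essentially as follows (a sketch
      consistent with this description; the flag names follow the kernel's
      FTRACE_FL_* convention):
      
        static bool skip_record(struct dyn_ftrace *rec)
        {
                /*
                 * If the record is disabled, still process it when it is
                 * also marked enabled: the call site must be turned back
                 * into a nop and its accounting cleaned up.
                 */
                return rec->flags & FTRACE_FL_DISABLED &&
                        !(rec->flags & FTRACE_FL_ENABLED);
        }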
      
      Link: https://lkml.kernel.org/r/20221005003809.27d2b97b@gandalf.local.home
      
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: stable@vger.kernel.org
      Fixes: b39181f7 ("ftrace: Add FTRACE_MCOUNT_MAX_OFFSET to avoid adding weak function")
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>