1. 11 Feb, 2021 1 commit
      tracing: Check length before giving out the filter buffer · b220c049
      Steven Rostedt (VMware) authored
      When filters are used by trace events, a page is allocated on each CPU and
      used to copy the trace event fields to this page before writing to the ring
      buffer. The reason to use the filter and not write directly into the ring
      buffer is because a filter may discard the event and there's more overhead
      on discarding from the ring buffer than the extra copy.
      
      The problem here is that there is no check against the size being allocated
      when using this page. If an event asks for more than a page size while being
      filtered, it will get only a page, leading to the caller writing more than
      what was allocated.
      
      Check the length of the request, and if it is more than PAGE_SIZE minus the
      header, default back to allocating from the ring buffer directly. The ring
      buffer may reject the event if it is too big anyway, but it won't overflow.
      (A sketch of this check follows this entry.)
      
      Link: https://lore.kernel.org/ath10k/1612839593-2308-1-git-send-email-wgong@codeaurora.org/
      
      Cc: stable@vger.kernel.org
      Fixes: 0fc1b09f ("tracing: Use temp buffer when filtering events")
      Reported-by: Wen Gong <wgong@codeaurora.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
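      
      A minimal standalone sketch of the kind of length check described above
      (an illustrative model only, not the actual kernel diff; the constants
      and names are stand-ins):
      
          /* Illustrative model: only hand out the per-CPU filter page when
           * the requested event length fits under the page size minus the
           * page header; otherwise fall back to reserving space in the ring
           * buffer directly, which may reject the event but cannot be
           * overflowed by it. */
          #include <stdbool.h>
          #include <stddef.h>
      
          #define MODEL_PAGE_SIZE   4096u   /* stand-in for PAGE_SIZE */
          #define MODEL_HEADER_SIZE 8u      /* assumed buffered-event header size */
      
          static bool use_filter_page(size_t len)
          {
                  return len <= MODEL_PAGE_SIZE - MODEL_HEADER_SIZE;
          }
      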
  2. 05 Feb, 2021 1 commit
      tracing: Do not count ftrace events in top level enable output · 256cfdd6
      Steven Rostedt (VMware) authored
      The file /sys/kernel/tracing/events/enable is used to enable all events by
      echoing in "1", or disabling all events when echoing in "0". To know if all
      events are enabled, disabled, or some are enabled but not all of them,
      cat'ing the file should show either "1" (all enabled), "0" (all disabled), or
      "X" (some enabled but not all of them). This works the same as the "enable"
      files in the individual system directories (like tracing/events/sched/enable).
      
      But when all events are enabled, the top level "enable" file shows "X". The
      reason is that it is checking the "ftrace" events, which are special events
      that only exist for their format files. These include the format for the
      function tracer events, which are enabled when the function tracer is
      enabled, but not by the "enable" file. The check includes these events,
      which will always be disabled, and even though all true events are enabled,
      the top level "enable" file will show "X" instead of "1".
      
      To fix this, have the check test the event's flags to see if it has the
      "IGNORE_ENABLE" flag set, and if so, skip it (see the sketch after this
      entry).
      
      Cc: stable@vger.kernel.org
      Fixes: 553552ce ("tracing: Combine event filter_active and enable into single flags field")
      Reported-by: "Yordan Karadzhov (VMware)" <y.karadz@gmail.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
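      
      A small standalone model of the aggregation described above (illustrative
      only; the structure and field names are stand-ins for the kernel's event
      flags):
      
          /* Illustrative model: compute what the top level "enable" file
           * reports, skipping events carrying an ignore-enable flag so they
           * no longer force the "X" (mixed) state. */
          #include <stdbool.h>
      
          struct model_event {
                  bool enabled;
                  bool ignore_enable;   /* models the IGNORE_ENABLE flag */
          };
      
          static char enable_state(const struct model_event *ev, int nr)
          {
                  bool any_on = false, any_off = false;
                  int i;
      
                  for (i = 0; i < nr; i++) {
                          if (ev[i].ignore_enable)
                                  continue;   /* the fix: do not count these */
                          if (ev[i].enabled)
                                  any_on = true;
                          else
                                  any_off = true;
                  }
                  if (any_on && any_off)
                          return 'X';
                  return any_on ? '1' : '0';
          }
      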
  3. 02 Feb, 2021 1 commit
  4. 29 Jan, 2021 4 commits
      kretprobe: Avoid re-registration of the same kretprobe earlier · 0188b878
      Wang ShaoBo authored
      Our system encountered a re-init error when re-registering the same kretprobe,
      where the kretprobe_instance in rp->free_instances is illegally accessed
      after re-init.
      
      A check to avoid re-registration was introduced for kprobes earlier, but it
      is missing for register_kretprobe(). We must check whether the kprobe has
      already been registered before re-initializing the kretprobe; otherwise the
      re-init will destroy the data structures of the registered kretprobe, which
      can lead to memory leaks, system crashes, and other unexpected behavior.
      
      We use check_kprobe_rereg() to check whether the kprobe has already been
      registered before running register_kretprobe()'s body, giving a warning
      message and terminating the registration process (see the sketch after
      this entry).
      
      Link: https://lkml.kernel.org/r/20210128124427.2031088-1-bobo.shaobowang@huawei.com
      
      Cc: stable@vger.kernel.org
      Fixes: 1f0ab409 ("kprobes: Prevent re-registration of the same kprobe")
      [ The above commit should have been done for kretprobes too ]
      Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@linux.ibm.com>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
      Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
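      
      A minimal sketch of the ordering change described above (an illustrative
      model, not the kprobes API; the structure and field names are stand-ins):
      
          /* Illustrative model: refuse to re-initialize a kretprobe that is
           * already registered, instead of wiping its live state and leaking
           * or corrupting the instances it already owns. */
          #include <errno.h>
          #include <stdbool.h>
      
          struct model_kretprobe {
                  bool registered;
                  int  free_instances;   /* stands in for rp->free_instances */
          };
      
          static int model_register_kretprobe(struct model_kretprobe *rp)
          {
                  /* Equivalent of the check_kprobe_rereg() check: bail out
                   * before any re-initialization touches live data. */
                  if (rp->registered)
                          return -EINVAL;
      
                  rp->free_instances = 4;   /* (re)init only on first registration */
                  rp->registered = true;
                  return 0;
          }
      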
      tracing/kprobe: Fix to support kretprobe events on unloaded modules · 97c753e6
      Masami Hiramatsu authored
      Fix kprobe_on_func_entry() to return an error code instead of false so that
      register_kretprobe() can return an appropriate error code.
      
      append_trace_kprobe() expects the kprobe registration to return -ENOENT
      when the target symbol is not found, and it then checks whether the
      target module is unloaded. If the target module doesn't exist, it
      defers probing the target symbol until the module is loaded.
      
      However, since register_kretprobe() returns -EINVAL instead of -ENOENT
      in that case, it always fails to put the kretprobe event on unloaded
      modules. e.g.
      
      Kprobe event:
      /sys/kernel/debug/tracing # echo p xfs:xfs_end_io >> kprobe_events
      [   16.515574] trace_kprobe: This probe might be able to register after target module is loaded. Continue.
      
      Kretprobe event: (p -> r)
      /sys/kernel/debug/tracing # echo r xfs:xfs_end_io >> kprobe_events
      sh: write error: Invalid argument
      /sys/kernel/debug/tracing # cat error_log
      [   41.122514] trace_kprobe: error: Failed to register probe event
        Command: r xfs:xfs_end_io
                   ^
      
      To fix this bug, change kprobe_on_func_entry() to detect symbol lookup
      failure and return -ENOENT in that case. Otherwise it returns -EINVAL
      or 0 (success: the given address is on the function entry). A sketch of
      this distinction follows this entry.
      
      Link: https://lkml.kernel.org/r/161176187132.1067016.8118042342894378981.stgit@devnote2
      
      Cc: stable@vger.kernel.org
      Fixes: 59158ec4 ("tracing/kprobes: Check the probe on unloaded module correctly")
      Reported-by: Jianlin Lv <Jianlin.Lv@arm.com>
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
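      
      A minimal sketch of the error-code distinction described above
      (illustrative model only; the lookup helpers are stand-ins, not the
      kernel's kprobe_on_func_entry()):
      
          /* Illustrative model: distinguish "symbol not found" (-ENOENT, the
           * caller may defer the probe until the module is loaded) from
           * "found but not a function entry" (-EINVAL) and success (0). */
          #include <errno.h>
          #include <stdbool.h>
      
          static unsigned long model_lookup_symbol(const char *sym)
          {
                  (void)sym;
                  return 0;   /* 0 models "not found": module not loaded yet */
          }
      
          static bool model_is_func_entry(unsigned long addr, unsigned long offset)
          {
                  return addr != 0 && offset == 0;
          }
      
          static int model_on_func_entry(const char *sym, unsigned long offset)
          {
                  unsigned long addr = model_lookup_symbol(sym);
      
                  if (!addr)
                          return -ENOENT;   /* lets the caller defer the probe */
                  if (!model_is_func_entry(addr, offset))
                          return -EINVAL;
                  return 0;
          }
      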
      tracing: Use pause-on-trace with the latency tracers · da7f84cd
      Viktor Rosendahl authored
      Earlier, tracing was disabled when reading the trace file. This behavior
      was changed with:
      
      commit 06e0a548 ("tracing: Do not disable tracing when reading the
      trace file").
      
      This doesn't seem to work with the latency tracers.
      
      The above-mentioned commit did not only change the behavior but also added
      an option to emulate the old behavior. The idea of this patch is to enable
      this pause-on-trace option when the latency tracers are used (see the
      sketch after this entry).
      
      Link: https://lkml.kernel.org/r/20210119164344.37500-2-Viktor.Rosendahl@bmw.de
      
      Cc: stable@vger.kernel.org
      Fixes: 06e0a548 ("tracing: Do not disable tracing when reading the trace file")
      Signed-off-by: Viktor Rosendahl <Viktor.Rosendahl@bmw.de>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
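      
      A small sketch of the idea described above (illustrative model only; the
      flag and function names are stand-ins for the tracer's option handling):
      
          /* Illustrative model: when a latency tracer starts, save the user's
           * pause-on-trace setting and force it on, so reading the trace file
           * pauses tracing; restore the saved value when the tracer stops. */
          #include <stdbool.h>
      
          struct model_tracer {
                  bool pause_on_trace;         /* models the pause-on-trace option */
                  bool saved_pause_on_trace;
          };
      
          static void model_latency_tracer_start(struct model_tracer *tr)
          {
                  tr->saved_pause_on_trace = tr->pause_on_trace;
                  tr->pause_on_trace = true;
          }
      
          static void model_latency_tracer_stop(struct model_tracer *tr)
          {
                  tr->pause_on_trace = tr->saved_pause_on_trace;
          }
      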
      fgraph: Initialize tracing_graph_pause at task creation · 7e0a9220
      Steven Rostedt (VMware) authored
      On some archs, the idle task can call into cpu_suspend(). The cpu_suspend()
      will disable or pause function graph tracing, as there are some paths in
      bringing down the CPU that can have issues with their return address being
      modified. The task_struct structure has a "tracing_graph_pause" atomic
      counter; when it is set to something other than zero, the function graph
      tracer will not modify the return address.
      
      The problem is that the tracing_graph_pause counter is initialized when the
      function graph tracer is enabled. This can corrupt the counter for the idle
      task if the idle task is suspended on these architectures.
      
         CPU 1				CPU 2
         -----				-----
        do_idle()
          cpu_suspend()
            pause_graph_tracing()
                task_struct->tracing_graph_pause++ (0 -> 1)
      
      				start_graph_tracing()
      				  for_each_online_cpu(cpu) {
      				    ftrace_graph_init_idle_task(cpu)
      				      task_struct->tracing_graph_pause = 0 (1 -> 0)
      
            unpause_graph_tracing()
                task_struct->tracing_graph_pause-- (0 -> -1)
      
      The above should have gone from 1 to zero, and enabled function graph
      tracing again. But instead, it is set to -1, which keeps it disabled.
      
      There's no reason that the tracing_graph_pause field on the task_struct
      cannot be initialized at boot up (see the sketch after this entry).
      
      Cc: stable@vger.kernel.org
      Fixes: 380c4b14 ("tracing/function-graph-tracer: append the tracing_graph_flag")
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=211339
      Reported-by: pierre.gondois@arm.com
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
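      
      A minimal sketch of the initialization change described above
      (illustrative model only; the task structure and helpers are stand-ins):
      
          /* Illustrative model: initialize the pause counter once, when the
           * task is created, instead of re-zeroing it every time the graph
           * tracer is enabled, which could clobber a pause taken by the idle
           * task inside cpu_suspend(). */
          #include <stdatomic.h>
      
          struct model_task {
                  atomic_int tracing_graph_pause;
          };
      
          static void model_task_create(struct model_task *t)
          {
                  atomic_init(&t->tracing_graph_pause, 0);   /* the fix: init here */
          }
      
          static void model_graph_init_idle_task(struct model_task *t)
          {
                  /* Pre-fix behavior re-zeroed the counter here, losing a
                   * pause the idle task raised during cpu_suspend(); after
                   * the fix the counter is left untouched. */
                  (void)t;
          }
      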
  5. 25 Jan, 2021 1 commit
  6. 24 Jan, 2021 32 commits